Environmental preservationists… er, no, I won’t try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!
Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, “Bambi Lovers versus Tree Huggers: A Critique of Rolston’s Environmental Ethics”: “Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.”
Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe.
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with.
Yes, that’s the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that’s precisely why we’re having this conversation, as well as why SIAI’s research is so important. :)
but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation.
I hope so. Of course, it’s not as though the only two possibilities are “CEV” or “extinction.” There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political “realist” scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I’m estimating the probability that this is the case at… significantly less than 50%.
If you include paperclippers or suffering-maximizers in your definition of “anyone,” then I’d put the probability close to 0%. If “anyone” just includes humans, I’d still put it less than, say, 10^-3.
Just so long as they don’t force any other minds to experience pain.
Yeah, although if we take the perspective that individuals are different people over time (a “person” is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to “forcing someone” to feel pain…
Like many others here, I subscribe to emotivism as well as utilitarianism.
That is inconsistent. Utilitarianism has to assume there’s a fact about the good; otherwise, what are you maximizing? Emotivism insists that there is no fact about the good. For example, for an emotivist, “You should not have stolen the bread” expresses exactly the same factual content as “You stole the bread.” (On this view, presumably, indicating “mere disapproval” doesn’t count as factual information.)
Sure. Then what I meant was that I’m an emotivist with a strong desire to see suffering reduced and pleasure increased in the manner that a utilitarian would advocate, and I feel a deep impulse to do what I can to help make that happen. I don’t think utilitarianism is “true” (I don’t know what that could possibly mean), but I want to see it carried out.
Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.
checking out the Wikipedia article… hmm… I think I agree with emotivism too, to some degree. I already have a habit of saying “but that’s just my opinion”, and being uncertain enough about the validity (validity according to what?) of my preferences to not dare to enforce them if other people disagree. And emotivism seems like a formalization of the “but that’s just my opinion”. That could be useful.
Yes, that’s the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that’s precisely why we’re having this conversation, as well as why SIAI’s research is so important. :)
good point. and yeah, that’s one of the main issues causing me to doubt whether SIAI has any hope of achieving their mission.
I hope so. Of course, it’s not as though the only two possibilities are “CEV” or “extinction.” There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political “realist” scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.
good point. Have you had any contact with Metafire yet? He strongly agrees with you on this. Just recently he started posting to LW.
oh, and “quixotic”, that’s the word I was looking for, thanks :)
If you include paperclippers or suffering-maximizers in your definition of “anyone,” then I’d put the probability close to 0%. If “anyone” just includes humans, I’d still put it less than, say, 10^-3.
heh, yeah, that “significantly less than 50%” was actually meant as an extremely sarcastic understatement. I need to learn how to express stuff like this more clearly.
Yeah, although if we take the perspective that individuals are different people over time (a “person” is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to “forcing someone” to feel pain…
good point! This suggests the possibility of requiring people to go through regular mental health checkups after the Singularity. Preferably as unobtrusively as possible. Giving them a chance to release themselves from any restrictions they tried to place on their future selves. Though the question of what qualifies as “mentally healthy” is… complex and controversial.