I would predict, based on human nature, that if the 3^^^3 people were asked whether they wanted a dust speck in each of their eyes in exchange for not torturing another individual for 50 years, they would probably vote for the dust specks.
Really? Preference utilitarianism prevails on Less Wrong? I haven’t been around too long, but I would have guessed that moral anti-realism (in several forms) prevailed.
Isn’t this a confusion of levels, with preference utilitarianism being an ethical theory, and moral anti-realism being a metaethical theory?
I would like to hope not.
Generally, I doubt that many people on Less Wrong believe that the universe has an “inherent” moral property, so I suppose your guess is accurate. There is, however, a fairly strong emphasis on (trans)humanist ideas. There doesn’t have to be a term for “rightness” in the equations of quantum mechanics; that simply doesn’t matter, because humans care. Often, though, humanity’s moral instincts cause us to hurt the world more than we help it. That’s why Eliezer tells us so frequently to shut up and multiply.
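A toy version of that “shut up and multiply” calculation, sketched in Python. Every number here is an illustrative assumption (the disutilities are made up, and the population stands in for 3^^^3, which no variable can hold); the point is only that multiplying a tiny harm by an astronomical count can swamp a huge harm to one person.

```python
# All figures below are illustrative assumptions, not claims about real utilities.
speck_disutility = 1e-9    # assumed harm of one dust speck to one person
torture_disutility = 1e7   # assumed harm of 50 years of torture to one person

# 3^^^3 dwarfs any constant we could write down; even this modest
# stand-in population is enough to show how the multiplication goes.
population = 10 ** 30

total_speck_disutility = speck_disutility * population
print(total_speck_disutility > torture_disutility)  # True
```

With these (arbitrary) numbers the aggregated specks outweigh the torture by many orders of magnitude, which is the conclusion scope insensitivity makes intuitively hard to accept.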
I would probably go for specks here. Many people, I predict, would get an emotional high out of thinking that their sacrifice will prevent someone from being tortured (while scope insensitivity prevents them from realizing just how small 1/3^^^3 is). If they actually received a dust speck in the eye shortly afterwards, I think there is a significant chance they would take it to mean that specks had been chosen instead of torture (and thus they would get even more of a high after being specked, if they knew why).
If you polled people in such a way that they wouldn’t get the high, then the answer should be the same as before. Scope insensitivity still kicks in, and people will vote without understanding how big 3^^^3 is. (I don’t understand it myself; the description of ‘a 1 followed by a string of zeroes as long as the Bible’ is already so mind-numbingly big that I know I can’t comprehend it.)
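For concreteness, 3^^^3 is Knuth’s up-arrow notation. A minimal recursive sketch (the function name `up` is mine) shows how quickly it escapes computation:

```python
def up(a, b, n):
    """Knuth's up-arrow a ↑^n b: n=1 is exponentiation, and each
    higher n iterates the level below it b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, up(a, b - 1, n), n - 1)

print(up(3, 3, 1))  # 3^3 = 27
print(up(3, 3, 2))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# up(3, 3, 3), i.e. 3^^^3, is a power tower of 3s of height
# 7625597484987 — far beyond anything this function could ever return.
```

So 3^^^3 is not merely “a 1 followed by a Bible-length string of zeroes”; that description already fails at the 3^^4 stage of the tower.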
Gotcha.
You can (sort of) be both. I remember there being at least one person on Felicifia.org who said he figures there is no morality, so he might as well maximize happiness.
I’m not a moral anti-realist, but I’m not all that sure of it, and I’m pretty sure I’d stay a utilitarian anyway.
That said, I’m emphatically not a preference utilitarian. Preferences are a map. Reality is a territory. On a fundamental physical level, they are incomparable.
What.
I’m going to say that the first statement doesn’t really seem to make much more sense than the others.
I understood DanielLC’s acquaintance as meaning that he prefers to maximize happiness and he believes that his only reasons for action are his preferences and (if it existed) morality, so in the absence of morality he will act solely according to his preferences, which are to maximize happiness.
And, sure, if his preferences had been for entering a monastery, or maximizing unhappiness, or moonwalking in a clown suit, then in the absence of morality he would act solely according to those preferences.
I doubt very much that any of these accounts actually describe a real person, and I would be very nervous around anyone they did describe, but none of this is senseless.
I’d start by figuring that I might as well maximize my own happiness, and then decide that it would make more sense to maximize happiness in general.