Why? And, more importantly, why should he care? It’s in his interest to have the FAI follow his extrapolated volition, not his revealed preference, whether or not that revealed preference takes the form of his own beliefs about his extrapolated volition.
Because the power of moral philosophy to actually change things like the desire for status is limited, even in very intelligent individuals interested in moral philosophy. The hypothesis that thinking much faster, knowing much more, etc., will radically change that has little empirical support, and no strong non-empirical arguments to justify an extreme credence.
When we are speaking about what to do with the world, which is what formal preference (extrapolated volition) is ultimately about, this is different in character (domain of application) from any heuristics that a human person has for what he personally should be doing. Any human consequentialist is a hopeless dogmatic deontologist in comparison with their personal FAI. Even if we take both views as representations of the same formal object, syntactically they have little in common. We are not comparing what a human will do with what advice that human will give to himself if he knew more. Extrapolated volition is a very different kind of wish, a kind of wish that can’t be comprehended by a human, and so no heuristics already in mind will resemble heuristics about that wish.
But you seem to have the heuristic that the extrapolated volition of even the most evil human “won’t be that bad”. Where does that come from?
That’s not a heuristic in the sense I use the word in the comment above; it’s (rather weakly) descriptive of a goal, not rules for achieving it.
The main argument (and I changed my mind on this recently) is the same as for why another normal human’s preference isn’t that bad: sympathy. If human preference has a component of sympathy, of caring about other human-like persons’ preferences, then there is always a sizable slice of the control-of-the-universe pie going to everyone’s preference, even if it is orders of magnitude smaller than the slice going to the preference in control (see the toy sketch below). I don’t expect that even the most twisted human can have a whole aspect of preference completely absent, even if it is manifested to a smaller degree than usual.
This apparently changes my position on the danger of value drift, and on modifying the minds of uploads in particular. Even though we will lose preference to value drift, we won’t lose it completely, so long as people holding the original preference persist.
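To make the “sizable slice, even if orders of magnitude smaller” point concrete, here is a purely illustrative toy model (my own construction, not something the comment commits to): an agent splits a unit of resources between its own projects and everyone else, with a small sympathy weight epsilon and diminishing (logarithmic) returns.

```python
import numpy as np

def others_share(epsilon, grid=1_000_000):
    """Fraction of a unit resource that goes to everyone else when the agent
    maximizes U(x) = (1 - epsilon)*log(x) + epsilon*log(1 - x), where x is the
    fraction spent on the agent's own preference (toy model, assumption only)."""
    x = np.linspace(1e-9, 1 - 1e-9, grid)
    u = (1 - epsilon) * np.log(x) + epsilon * np.log(1 - x)
    return 1 - x[np.argmax(u)]

for eps in (0.5, 0.1, 0.01, 0.001):
    print(f"sympathy weight {eps}: others get ~{others_share(eps):.3f} of the pie")
# With diminishing returns the optimum is x = 1 - epsilon, so others always
# receive a slice equal to the sympathy weight: tiny relative to the share of
# the preference in control, but never zero.
```

On this admittedly cartoonish reading, even a heavily discounted sympathy term still claims a fixed, non-zero share of the outcome, which is the shape of the argument above.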
I don’t expect that even the most twisted human can have a whole aspect of preference completely absent, even if it is manifested to a smaller degree than usual.
Humans also have other preferences that are in conflict with sympathy, for example the desire to see one’s enemies suffer. If sympathy is manifested to a sufficiently small degree, then it won’t be enough to override those other preferences.
Are you aware of what has been happening in Congo, for example?
It seems to me there’s a pretty strong correlation between philosophical competence and endorsement of utilitarian (vs egoist) values, and also that most who endorse egoist values do so because they’re confused about e.g. various issues around personal identity and the difference between pursuing one’s self-interest and following one’s own goals.
Can we taboo “utilitarian”, since nobody ever seems able to agree on what it means? Also, do you have any references to strong arguments for whatever you mean by utilitarianism? I’ve yet to encounter any good arguments in favour of it, but given how many apparently intelligent people seem to consider themselves utilitarians, they presumably exist somewhere.
Utility is just a basic way to describe “happiness” (or, if you prefer, “preferences”) in an economic context. Sometimes the measurement of utility is a utilon. To say you are a Utilitarian just means that you’d prefer an outcome that results in the largest total number of utilons over the human population. (Or in the universe, if you allow for Babyeaters, Clippies, Utility Monsters, Super Happies, and so on.)
Alicorn, who I think is more of an expert on this topic than most, had this to say:
I’m taking an entire course called “Weird Forms of Consequentialism”, so please clarify—when you say “utilitarianism”, do you speak here of direct, actual-consequence, evaluative, hedonic, maximizing, aggregative, total, universal, equal, agent-neutral consequentialism?
Just the other day I debated with PhilGoetz whether utilitarianism is supposed to imply agent-neutrality or not. I still don’t know what most people mean on that issue.
Even assuming agent neutrality, there is a major difference between average and total utilitarianism. Then there are questions about whether you weight agents equally or differently based on some criteria; the question of whether/how to weight animals or other non-human entities is a subset of that question (see the sketch below).
Given all these questions it tells me very little about what ethical system is being discussed when someone uses the word ‘utilitarian’.
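To make those distinctions concrete, here is a minimal sketch (the function names and toy numbers are mine, purely for illustration) of how total and average aggregation, plus per-agent weights, can rank the same populations differently:

```python
def total_view(utilities, weights=None):
    """Total utilitarianism: sum of (optionally weighted) individual utilities."""
    weights = weights or [1.0] * len(utilities)
    return sum(w * u for w, u in zip(weights, utilities))

def average_view(utilities, weights=None):
    """Average utilitarianism: weighted mean of individual utilities."""
    weights = weights or [1.0] * len(utilities)
    return total_view(utilities, weights) / sum(weights)

# Three fairly happy agents vs. a hundred barely-happy ones.
small_world = [10, 10, 10]
large_world = [1] * 100

print(total_view(small_world), total_view(large_world))      # 30 vs 100: the total view prefers the large world
print(average_view(small_world), average_view(large_world))  # 10.0 vs 1.0: the average view prefers the small one

# The weights argument stands in for the further question of whether agents
# (humans, animals, uploads, Clippies...) count equally; setting a weight to
# zero simply excludes that agent from moral consideration.
```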
Given all these questions it tells me very little about what ethical system is being discussed when someone uses the word ‘utilitarian’.
It does substantially reduce the decision space. For example, it is generally a safe bet that the individual is not going to subscribe to deontological claims that say “killing humans is always bad.” I’d thus be very surprised to ever meet a pacifist utilitarian.
It probably is fair to say that given the space of ethical systems generally discussed on LW, talking about utilitarianism doesn’t narrow the field down much from that space.
I haven’t seen any stats on that issue. Is there any evidence relating to the topic?
Depending on how you define ‘philosophical competence’, the results of the PhilPapers survey may be relevant.
The PhilPapers Survey was a survey of professional philosophers and others on their philosophical views, carried out in November 2009. The Survey was taken by 3226 respondents, including 1803 philosophy faculty members and/or PhDs and 829 philosophy graduate students.
Here are the stats for Philosophy Faculty or PhD, All Respondents
Normative ethics: deontology, consequentialism, or virtue ethics?
Other: 558 / 1803 (30.9%)
Accept or lean toward consequentialism: 435 / 1803 (24.1%)
Accept or lean toward virtue ethics: 406 / 1803 (22.5%)
Accept or lean toward deontology: 404 / 1803 (22.4%)
And for Philosophy Faculty or PhD, Area of Specialty Normative Ethics
Normative ethics: deontology, consequentialism, or virtue ethics?
Other: 80 / 274 (29.1%)
Accept or lean toward deontology: 78 / 274 (28.4%)
Accept or lean toward consequentialism: 66 / 274 (24%)
Accept or lean toward virtue ethics: 50 / 274 (18.2%)
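For reference, a quick check of the arithmetic behind the conclusion that follows, using only the counts quoted above (the grouping labels are mine):

```python
# Counts copied from the PhilPapers 2009 figures quoted above.
breakdowns = {
    "faculty/PhD, all respondents": {
        "other": 558, "consequentialism": 435, "virtue ethics": 406, "deontology": 404},
    "faculty/PhD, specialty normative ethics": {
        "other": 80, "deontology": 78, "consequentialism": 66, "virtue ethics": 50},
}

for group, counts in breakdowns.items():
    n = sum(counts.values())
    share = counts["consequentialism"] / n
    print(f"{group}: consequentialism {counts['consequentialism']}/{n} = {share:.1%}")
# Roughly 24% in both groups, and utilitarians are only a subset of the
# consequentialists, so they are a minority of this sample.
```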
As utilitarianism is a subset of consequentialism, it appears you could conclude that utilitarians are a minority in this sample.
Thanks! For perspective:
http://en.wikipedia.org/wiki/Consequentialism#Varieties_of_consequentialism
Unfortunately the survey doesn’t directly address the main distinction in the original post since utilitarianism and egoism are both forms of consequentialism.