It seems to me there’s a pretty strong correlation between philosophical competence and endorsement of utilitarian (vs. egoist) values, and also that most people who endorse egoist values do so because they’re confused about, e.g., various issues around personal identity and the difference between pursuing one’s self-interest and following one’s own goals.
Can we taboo “utilitarian”, since nobody ever seems to be able to agree on what it means? Also, do you have any references to strong arguments for whatever you mean by utilitarianism? I’ve yet to encounter any good arguments in favour of it, but given how many apparently intelligent people seem to consider themselves utilitarians, they presumably exist somewhere.
Utility is just a basic way to describe “happiness” (or, if you prefer, “preferences”) in an economic context; the unit of measurement for utility is sometimes called a utilon. To say you are a utilitarian just means that you’d prefer the outcome that results in the largest total number of utilons over the human population. (Or over the universe, if you allow for Babyeaters, Clippies, Utility Monsters, Super Happies, and so on.)
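To make that definition concrete, here is a minimal sketch (illustrative only; the outcomes and utility numbers are invented) of how a total utilitarian in this sense would rank outcomes:

    # Toy illustration: a total utilitarian picks the outcome that
    # maximizes the sum of utilons across everyone affected.
    outcomes = {
        "outcome_a": [5, 5, 5],    # three people at 5 utilons each -> 15 total
        "outcome_b": [9, 9, -4],   # two very happy, one harmed     -> 14 total
    }

    def total_utilons(utilities):
        # Aggregate welfare by simple summation.
        return sum(utilities)

    best = max(outcomes, key=lambda name: total_utilons(outcomes[name]))
    print(best)  # -> outcome_a, since 15 > 14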
Alicorn, who I think is more of an expert on this topic than most, had this to say:
I’m taking an entire course called “Weird Forms of Consequentialism”, so please clarify—when you say “utilitarianism”, do you speak here of direct, actual-consequence, evaluative, hedonic, maximizing, aggregative, total, universal, equal, agent-neutral consequentialism?
Just the other day I debated with PhilGoetz whether utilitarianism is supposed to imply agent-neutrality or not. I still don’t know what most people mean on that issue.
Even assuming agent neutrality there is a major difference between average and total utilitarianism. Then there are questions about whether you weight agents equally or differently based on some criteria. The question of whether/how to weight animals or other non-human entities is a subset of that question.
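A toy example of the average/total split (numbers invented for illustration): adding a person whose welfare is positive but below the current average raises total utility while lowering average utility, so the two views can rank the same pair of worlds oppositely.

    # Toy illustration: average vs. total utilitarianism can disagree.
    # World A: three people at 10 utilons each.
    # World B: the same three people plus a fourth at 1 utilon.
    world_a = [10, 10, 10]
    world_b = [10, 10, 10, 1]

    def total(utilities):
        return sum(utilities)                    # total view: sum of utilons

    def average(utilities):
        return sum(utilities) / len(utilities)   # average view: mean utilons

    print(total(world_a), total(world_b))      # 30 31   -> total prefers B
    print(average(world_a), average(world_b))  # 10.0 7.75 -> average prefers A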
Given all these questions it tells me very little about what ethical system is being discussed when someone uses the word ‘utilitarian’.
It does substantially reduce the decision space. For example, it is generally a safe bet that the individual is not going to subscribe to deontological claims such as “killing humans is always bad.” I’d thus be very surprised to ever meet a pacifist utilitarian.
It probably is fair to say that given the space of ethical systems generally discussed on LW, talking about utilitarianism doesn’t narrow the field down much from that space.
I haven’t seen any stats on that issue. Is there any evidence relating to the topic?
Depending on how you define ‘philosophical competence’, the results of the PhilPapers survey may be relevant.
The PhilPapers Survey was a survey of professional philosophers and others on their philosophical views, carried out in November 2009. The Survey was taken by 3226 respondents, including 1803 philosophy faculty members and/or PhDs and 829 philosophy graduate students.
Here are the stats for Philosophy Faculty or PhD, All Respondents:
Normative ethics: deontology, consequentialism, or virtue ethics?
Other: 558 / 1803 (30.9%)
Accept or lean toward consequentialism: 435 / 1803 (24.1%)
Accept or lean toward virtue ethics: 406 / 1803 (22.5%)
Accept or lean toward deontology: 404 / 1803 (22.4%)
And for Philosophy Faculty or PhD, Area of Specialty Normative Ethics:
Normative ethics: deontology, consequentialism, or virtue ethics?
Other: 80 / 274 (29.1%)
Accept or lean toward deontology: 78 / 274 (28.4%)
Accept or lean toward consequentialism: 66 / 274 (24%)
Accept or lean toward virtue ethics: 50 / 274 (18.2%)
As utilitarianism is a subset of consequentialism, it appears you could conclude that utilitarians are a minority in this sample.
Thanks! For perspective:
http://en.wikipedia.org/wiki/Consequentialism#Varieties_of_consequentialism
Unfortunately, the survey doesn’t directly address the main distinction in the original post, since utilitarianism and egoism are both forms of consequentialism.