Alicorn, who I think is more of an expert on this topic than most, had this to say:
I’m taking an entire course called “Weird Forms of Consequentialism”, so please clarify—when you say “utilitarianism”, do you speak here of direct, actual-consequence, evaluative, hedonic, maximizing, aggregative, total, universal, equal, agent-neutral consequentialism?
Just the other day I debated with PhilGoetz whether utilitarianism is supposed to imply agent-neutrality or not. I still don’t know what most people mean on that issue.
Even assuming agent-neutrality, there is a major difference between average and total utilitarianism. Then there are questions about whether you weight agents equally or differently based on some criteria. The question of whether/how to weight animals or other non-human entities is a subset of that question.
Given all these questions it tells me very little about what ethical system is being discussed when someone uses the word ‘utilitarian’.
It does substantially reduce the decision space. For example, it is generally a safe bet that the individual is not going to subscribe to deontological claims that say “killing humans is always bad.” I’d thus be very surprised to ever meet a pacifist utilitarian.
It probably is fair to say that given the space of ethical systems generally discussed on LW, talking about utilitarianism doesn’t narrow the field down much from that space.