all the utilitarian is saying is which utility function you should be maximizing (answer: the aggregate of the utility functions of all suitable agents)
The answer is the aggregate of some function over all suitable agents, but that function needn’t itself be a decision-theoretic utility function. It can be something else, like pleasure minus pain, or even pleasure-not-derived-from-murder minus pain.
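A rough way to formalize the distinction (a sketch; the symbols $W$, $A$, and $f_a$ are illustrative, not from the thread): the utilitarian maximizes

$$W(x) = \sum_{a \in A} f_a(x)$$

where $A$ is the set of suitable agents and $f_a$ is the per-agent measure being aggregated. Taking $f_a$ to be agent $a$'s own decision-theoretic utility function gives preference utilitarianism; taking it to be $a$'s pleasure minus pain gives a hedonic variant. Either way, only the aggregate $W$ needs to play the decision-theoretic role.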
Ah, I was equating preference utilitarianism with utilitarianism.
I still think that calling yourself a utilitarian can be dangerous, if only because it instantly calls to mind (in some interlocutors) a list of stock objections that just don’t apply given EY’s metaethics. It may be worth sticking to the terminology despite that cost, though.