As far as I can tell from the minimal information in that link, preference utilitarianism still involves summing/averaging/weighting utility across all persons. The ‘preference’ part of ‘preference utilitarianism’ refers to the fact that it is people’s ‘preferences’ that determine their individual utility, but the ‘utilitarianism’ part still implies summing/averaging/weighting across persons. The link mentions Peter Singer as the leading contemporary advocate of preference utilitarianism, and as I understand it he is still a utilitarian in that sense.
‘Maximizing expected utility over outcomes’ is just a description of how to make optimal decisions given a utility function. It is agnostic about what that utility function should be. Utilitarianism as a moral/ethical philosophy generally seems to advocate a choice of utility function that uses a unique weighting across all individuals as the definition of what is morally/ethically ‘right’.
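To illustrate that agnosticism, here is a minimal sketch in Python (the actions, outcomes, probabilities, and utility numbers are all invented): the expected-utility machinery is identical regardless of which utility function is plugged in, selfish or impartial.

```python
# A minimal sketch of expected-utility maximization over outcomes.
# The decision rule below is the same whichever utility function is plugged in;
# the actions, outcomes, probabilities, and utility numbers are invented for illustration.

def expected_utility(action, outcome_probs, utility):
    """Sum of P(outcome | action) * utility(outcome)."""
    return sum(p * utility(outcome) for outcome, p in outcome_probs[action].items())

def best_action(actions, outcome_probs, utility):
    """Pick the action with the highest expected utility under the given utility function."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Toy model: two actions, each a lottery over named outcomes.
outcome_probs = {
    "donate":  {"others_better_off": 0.9, "no_change": 0.1},
    "splurge": {"self_better_off": 0.8, "no_change": 0.2},
}
actions = list(outcome_probs)

# Two different utility functions; the maximization code is agnostic between them.
selfish = {"self_better_off": 10, "others_better_off": 0, "no_change": 0}.get
impartial = {"self_better_off": 1, "others_better_off": 5, "no_change": 0}.get

print(best_action(actions, outcome_probs, selfish))    # -> splurge
print(best_action(actions, outcome_probs, impartial))  # -> donate
```

The dispute in this thread is entirely about which utility function belongs in that last parameter, not about the maximization step itself.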
You could be right. I can’t see any mention of “averaging” or “summing” in the definitions (and it matters!), and if any sum is to be performed, it is vague about what class of entities is being summed over. However, as you say, Singer is a “sum” enthusiast. How you can measure “satisfaction” in a way that can be added up over multiple people is left as a mystery for readers.
I wouldn’t assert your second paragraph, though. Satisfying preferences is still a moral philosophy, regardless of whether those preferences belong to an individual agent or whether preference satisfaction is summed over a group.
Both concepts equally allow for agents with arbitrary preferences.
The main Wikipedia entry for Utilitarianism says:
Utilitarianism is the idea that the moral worth of an action is determined solely by its utility in providing happiness or pleasure as summed among all people. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its outcome.
Utilitarianism is often described by the phrase “the greatest good for the greatest number of people”, and is also known as “the greatest happiness principle”. Utility, the good to be maximized, has been defined by various thinkers as happiness or pleasure (versus suffering or pain), although preference utilitarians define it as the satisfaction of preferences.
Here ‘preference utilitarians’ links back to the short page on preference utilitarianism you referenced. That, combined with the description of Peter Singer as the most prominent advocate for preference utilitarianism, suggests weighted summing or averaging, though I’m not clear whether there is some specific procedure associated with ‘preference utilitarianism’.
Merely satisfying your own preferences is a moral philosophy, but it’s not utilitarianism; that would be Ethical Egoism, maybe, or just hedonism. What appears to distinguish utilitarian ethics is that they propose a unique utility function that globally defines what is moral/ethical for all agents.
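To make that structural difference concrete, here is a rough sketch (again in Python, with invented agents, utility numbers, and equal weights, so only the shape of the comparison matters): a utilitarian-style objective aggregates everyone’s utility into one shared score, while an egoist-style objective is evaluated separately by each agent.

```python
# A rough sketch of the structural difference discussed above; the agents,
# utilities, and weights are invented for illustration.

# Each individual's utility for two candidate actions (made-up numbers).
individual_utility = {
    "alice": {"action_a": 3.0, "action_b": 1.0},
    "bob":   {"action_a": 0.0, "action_b": 4.0},
    "carol": {"action_a": 1.0, "action_b": 2.0},
}
actions = ["action_a", "action_b"]

def utilitarian_score(action, weights):
    """One shared objective: a weighted sum of everyone's utility for the action."""
    return sum(weights[person] * utils[action]
               for person, utils in individual_utility.items())

def egoist_score(action, agent):
    """An agent-relative objective: only the agent's own utility counts."""
    return individual_utility[agent][action]

equal_weights = {person: 1.0 for person in individual_utility}

# The utilitarian criterion picks one answer that is supposed to bind everyone...
print(max(actions, key=lambda a: utilitarian_score(a, equal_weights)))  # -> action_b

# ...while the egoist criterion can pick a different answer for each agent.
print(max(actions, key=lambda a: egoist_score(a, "alice")))  # -> action_a
print(max(actions, key=lambda a: egoist_score(a, "bob")))    # -> action_b
```

The contested and unexplained step, as noted above, is how the individual utilities in that weighted sum could ever be measured on a common scale in the first place.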