Sounds like total preference-utilitarianism, instead of total hedonistic utilitarianism. Would this view imply that it is good to create beings whose preferences are satisfied? If yes, then it’s total PU. If no, then it might be prior-existence PU. The original article doesn’t explicitly specify whether it means hedonistic or preference utilitarianism, but the example given about killing only works for hedonistic utilitarianism, which is why I assumed that’s what’s meant. However, somewhere else in the article, it says
Total utilitarianism is defined as maximising the sum of everyone’s individual utility function.
And that seems more like preference-utilitarianism again. So something doesn’t work out here.
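To make the total vs. prior-existence distinction concrete, here is a minimal formalization of the two readings of “maximising the sum of everyone’s individual utility function” (my notation, not the article’s): write P(A) for the set of people who exist in outcome A, P_0 for the set of people who exist regardless of which outcome is chosen, and U_i for person i’s utility.

```latex
% Total utilitarianism: sum over everyone who exists in the chosen
% outcome, so creating a new being with satisfied preferences raises W.
W_{\text{total}}(A) = \sum_{i \in P(A)} U_i(A)

% Prior-existence utilitarianism: sum only over people whose existence
% does not depend on the choice, so creating new beings is value-neutral.
W_{\text{prior}}(A) = \sum_{i \in P_0} U_i(A)
```

Under the first sum, a world gains value from adding satisfied beings; under the second, it doesn’t. That is exactly the “if yes / if no” test above.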
As a side note, I’ve actually never encountered a total preference-utilitarian, only prior-existence ones (like Peter Singer). But it’s a consistent position.
But it’s not preference utilitarianism. In evaluating whether someone leads a good life, I care about whether they’re happy, and I care about whether their preferences are satisfied, but those aren’t the only things I care about. For example, I might think it’s a bad thing if a person lives the same day over and over again, even if that’s what the person wants and it makes the person happy. (Of course, it’s a small step from there to concluding it’s a bad thing when different people have the same experiences, and that sort of value is hard to incorporate into any total utilitarian framework.)
I think you might not want to call your ethical theory utilitarianism. Aquinas’ ethics also emphasizes the importance of the common welfare and loving thy neighbor as thyself, yet AFAIK no one calls his ethics utilitarian.
I think maybe the purest statement of utilitarianism is that it pursues “the greatest good for the greatest number”. The word “for” is important here. Something that improves your quality of life is good for you. Clippy might think (issues of rigid designators in metaethics aside) that paperclips are good without having a concept of whether they’re good for anyone, so he’s a consequentialist but not a utilitarian. An egoist has a concept of things being good for people, but chooses only those things that are good for himself, not for the greatest number; so an egoist is also a consequentialist but not a utilitarian. But there’s a pretty wide range of possible concepts of what’s good for an individual, and I think that entire range should be compatible with the term “utilitarian”.
It doesn’t make sense to me to count maximization of total X as “utilitarianism” if X is pleasure or if X is preference satisfaction but not if X is some other measure of quality of life. It doesn’t seem like that would cut reality at the joints. I don’t necessarily hold the position I described, but I think most criticisms of it are misguided, and it’s natural enough to deserve a short name.
I see, interesting. That means you bring in a notion that is independent of both the person’s experiences and preferences: a particular view on value (e.g. that life shouldn’t be repetitious). I’d just call this a consequentialist theory where the exact values would have to be specified in the description, rather than utilitarianism. But that’s just semantics; as you said initially, what matters is that we specify exactly what we’re talking about.
Not unless the value of a life is proportional to the extent to which the person’s preferences are satisfied.
What would you call the view I mentioned, if not total utilitarianism?