I think that’s what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.
I find it hard to believe that you believe that. Under that metric, for example, “pick a thousand happy people and kill their dogs” is a completely neutral act, along with lots of other extremely strange results.
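For concreteness, here is a minimal sketch of that claim. Every number in it (the population size, the utility values, the −5 “dog killed” penalty) is invented purely for illustration; the point is just that aggregation by min() ignores any harm that doesn’t touch the worst-off person.

```python
# Toy model: social welfare = utility of the worst-off person (min-utilitarianism).
# Every number here is invented purely for illustration.
worst_off = -50.0                    # one very badly-off person
happy_people = [80.0] * 1000         # a thousand happy dog owners
population = [worst_off] + happy_people

def min_welfare(utilities):
    """Aggregate by min(): only the worst-off person's utility matters."""
    return min(utilities)

before = min_welfare(population)

# "Pick a thousand happy people and kill their dogs": say each loses 5 utility,
# but none of them comes anywhere near the worst-off person.
after = min_welfare([worst_off] + [u - 5 for u in happy_people])

print(before, after)   # -50.0 -50.0: the metric scores the act as exactly neutral
```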
Oh, good point, maybe a kind of alphabetical ordering could break ties.
So then, we disregard everyone who isn’t affected by the possible action and maximize the minimum utility among those who are.
But still, this prefers a million people being punched once to any one person being punched twice, which seems silly—I’m just trying to parse out my intuition for choosing dust specks.
I get that the flaws in other possible methods count as a mark in favor of linear aggregation, but what positive reasons are there for it?
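To put numbers on the punch comparison, here is a small sketch contrasting “maximin over the affected people” with plain linear (sum) aggregation. Scoring a punch as −1 utility is an arbitrary assumption, as are all the other values.

```python
# Two candidate actions, each described by the utility change to every affected person.
# Scoring a punch as -1 is an arbitrary choice; all numbers are for illustration only.
million_punched_once = [-1.0] * 1_000_000
one_punched_twice = [-2.0]

def maximin(affected):
    """Rank actions by the worst outcome among the people they affect."""
    return min(affected)

def linear(affected):
    """Rank actions by the total of the utility changes (linear aggregation)."""
    return sum(affected)

# Maximin over the affected people prefers the million single punches...
print(maximin(million_punched_once) > maximin(one_punched_twice))  # True: -1 > -2

# ...while linear aggregation prefers the one person punched twice.
print(linear(one_punched_twice) > linear(million_punched_once))    # True: -2 > -1,000,000
```

So the two rules disagree in exactly the direction described above.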
Or, for a perhaps more dramatic instance: “Find the world’s unhappiest person and kill them”. Of course total utilitarianism might also endorse doing that (as might quite a lot of people, horrible though it sounds, considering just how wretched the lives of the world’s unhappiest people probably are), but min-utilitarianism continues to endorse it even if everyone in the world, including the soon-to-be-ex-unhappiest person, is extremely happy and very much wishes to go on living.
The specific problem causing that is that most versions of utilitarianism don’t let a person’s desire not to be killed affect the utility calculation, since once they have been killed they no longer have any utility.
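A sketch of that mechanism, assuming (per the comment above) that the welfare calculation ranges only over the people who exist after the action, and with invented utility numbers: when everyone is happy, removing the least-happy person still raises the min() score, even though it lowers the total.

```python
# Everyone is happy; even the world's "unhappiest" person has a good life.
# Utility numbers are invented for illustration.
population = [90.0, 85.0, 80.0, 60.0]    # 60 is the unhappiest person

def min_welfare(people):
    return min(people)

def total_welfare(people):
    return sum(people)

# Assume, as described above, that once someone is killed their utility simply
# drops out of the calculation: welfare is evaluated over the survivors only.
survivors = sorted(population)[1:]       # the unhappiest person is killed

print(min_welfare(survivors) > min_welfare(population))      # True: 80 > 60, min-utilitarianism approves
print(total_welfare(survivors) > total_welfare(population))  # False: the total drops, so total utilitarianism objects here
```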
Yes, this is a failure mode of (some forms of?) utilitarianism, but not the specific weirdness I was trying to get at, which was that if you aggregate by min(), then it’s completely morally OK to do very bad things to huge numbers of people—in fact, it’s no worse than radically improving huge numbers of lives—as long as you avoid affecting the one person who is worst-off. This is a very silly property for a moral system to have.
You can attempt to mitigate this property with too-clever objections, like “aha, but if you kill a happy person, then in the moment of their death they are temporarily the most unhappy person, so you have affected the metric after all”. I don’t think that actually works, but didn’t want it to obscure the point, so I picked “kill their dog” as an example, because it’s a clearly bad thing which definitely doesn’t bump anyone to the bottom.