Total utility has an obvious problem: it's only meaningful to talk about relative utilities, so where do we put zero? (The choice is completely arbitrary.)
If zero is very low, then total utility maximization = make as many people as possible
If zero is very high, then total utility maximization = kill everyone
If zero is average utility, then total utility maximization = doesn’t matter what you do
None of the three make any sense whatsoever.
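To make the worry concrete, here is a minimal numeric sketch (the welfare numbers and the three candidate zero points are invented for illustration, not anyone's actual proposal): the same three lives, scored against three different choices of zero and then totalled.

```python
# Minimal sketch with invented numbers: the same three lives, scored against
# three different choices of "zero", then totalled.
welfare = [6.0, 5.0, 4.0]  # hypothetical well-being levels of three existing people

def total_utility(levels, zero):
    """Total utility once each life is measured relative to the chosen zero point."""
    return sum(w - zero for w in levels)

for zero in (0.0, 10.0, sum(welfare) / len(welfare)):
    everyone = total_utility(welfare, zero)                  # the population as it is
    nobody   = total_utility([], zero)                       # the empty population
    one_more = total_utility(welfare + [zero + 0.1], zero)   # add one barely-worth-living life
    print(f"zero={zero}: everyone={everyone}, nobody={nobody}, one more={one_more}")

# zero very low (0.0):   each added life raises the total -> "make as many people as possible"
# zero very high (10.0): every existing life counts as negative, so the empty
#                        population scores highest -> "kill everyone"
# zero = current average: the existing population always sums to exactly zero
```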
You've already decided where to put zero when you say this:
If zero is very high, then total utility maximization = kill everyone
That means that zero is the utility of not existing. Granted, it's a lot easier to compare two different possible lives than it is to compare a possible life to that life not coming into existence, but by saying "kill anyone whose utility is less than zero" you're defining zero utility as the utility of a dead person.
Also,
If zero is average utility, then total utility maximization = doesn't matter what you do
does not make sense to me. Utility is relative, yes, but it's relative to states of the universe, not to other people. If average utility is currently zero, and then, let's say, I recover from an illness that has been causing me distress, then my personal utility has increased, and average utility is no longer zero. Other people don't magically lose utility when I happen to gain some. Total utility doesn't renormalize in the way you seem to think it does.
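A quick arithmetic check of this point, with made-up utility numbers (two bystanders and one person who recovers):

```python
# Made-up numbers: one person's utility rises, everyone else's stays fixed.
before = [2.0, -1.0, -1.0]   # average happens to be 0
after  = [5.0, -1.0, -1.0]   # the first person recovers from the illness

print(sum(before) / len(before))  # 0.0 -- average before
print(sum(after) / len(after))    # 1.0 -- average after
# Nothing was subtracted from the other two people: the total moved from 0 to 3
# and the average from 0 to 1. There is no renormalization step.
```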
The repugnant conclusion is certainly worth discussing, but the other two:
If zero is very high, then total utility maximization = kill everyone
I think it would be a very bad idea to have a utility function such that the utility of an empty universe is higher than the utility of a populated non-dystopia; so any utility function for the universe that I might approve of should have a pretty hefty negative value for empty universes. I don't think that's too awful of a requirement.
If zero is average utility, then total utility maximization = doesn't matter what you do
This looks like a total non sequitur to me. What do you mean?
He means that if utility is measured in such a way that average utility is always zero, then total utility is always zero too, since average utility is just total utility divided by the number of agents.
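Assuming that is the intended reading (each person's score is defined as their raw utility minus the current population mean), a tiny sketch shows why the total is then pinned at zero no matter what happens:

```python
# Degenerate measurement: define each person's "utility" as their raw score
# minus the current population mean. The average is then zero by construction,
# and so is the total, whatever actually happens in the world.
def renormalized(scores):
    mean = sum(scores) / len(scores)
    return [s - mean for s in scores]

for scores in ([2.0, -1.0, -1.0], [5.0, -1.0, -1.0], [100.0, 100.0, 100.0]):
    print(sum(renormalized(scores)))  # 0.0 each time
```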
Well, that’s not a very good utility function then, and taw’s three possibilities are nowhere near exhausting the range of possibilities.
So where do you put zero? By this one completely arbitrary decision you can collapse total utility maximization to one of these cases.
It gets far worse when you try to apply it to animals.
As for zero being very high, I've actually heard this argument many times about the existence of farm animals, which supposedly suffer so much that it would be better if they didn't exist. It can just as easily be applied to wild animals, even though it's far less common to do so.
With the animal zero set very low, total utility maximization turns us into a paperclip maximizer of insects, or whatever the simplest utility-positive life is.
If non-existent beings have exactly zero utility, so that any being with less than zero utility ought not to have come into existence, then the choice of where to put zero is clearly not arbitrary.