Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.
“Separability” of value just means being able to evaluate something without having to look at anything else. I think that whether or not it’s a good thing to bring a new person into existence depends only on facts about that person (assuming they don’t have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn’t be relevant what happened in the distant past. But average utilitarianism makes it relevant, because long-dead people affect the average wellbeing and therefore affect whether it’s good or bad to bring that person into existence.
But let’s return to the intuitive case above and make it a little stronger.
Now suppose:
Population A: 1 person suffering a lot (utility −10)
Population B: That same person, suffering an arbitrarily large amount (utility −n, for any arbitrarily large n), and a very large number, m, of people suffering −9.9.
Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. That is, average utilitarianism is willing to add horrendous suffering to someone’s already horrific life in order to bring into existence many other people with horrific lives.
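To spell out the arithmetic behind that claim (using only the utilities defined above): B’s average exceeds A’s average of −10 exactly when

$$\frac{-n - 9.9m}{m+1} > -10 \iff -n - 9.9m > -10(m+1) \iff 0.1m > n - 10 \iff m > 10(n - 10).$$

So for any finite n, choosing m greater than 10(n − 10) makes average utilitarianism rank B above A.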
Do you still get the intuition in favour of average here?
Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human: in pop A you will get utility −10, while in pop B you get an expected utility of (1/(m+1))(−n) + (m/(m+1))(−9.9). These intuitions could correspond to a straightforward “maximize the expected utility of ‘being someone in this world’”, or to something like “suppose all consciousness is experienced by a single entity from multiple perspectives, completing all lives and then cycling back again from the beginning; maximize this being’s utility”. Such perspectives would give the “non-intuitive” result in these sorts of thought experiments.
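A minimal sketch of that calculation in Python, with n and m chosen purely for illustration (these particular values aren’t from the discussion above):

```python
# Expected utility of "being a random person" in each population,
# using the utilities from the thought experiment above.

def expected_utility(utilities):
    # Average utility = expected utility of being a uniformly random member.
    return sum(utilities) / len(utilities)

n = 1_000                # illustrative suffering level for the one person in B
m = 10 * (n - 10) + 1    # just past the threshold m > 10*(n - 10)

pop_a = [-10]                 # Population A: one person at utility -10
pop_b = [-n] + [-9.9] * m     # Population B: that person at -n, plus m people at -9.9

print(expected_utility(pop_a))   # -10.0
print(expected_utility(pop_b))   # slightly above -10, so B "wins" on this view
```

As m grows, the expectation in B approaches −9.9, which is why this way of evaluating worlds agrees with average utilitarianism here.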
Hm, a downvote. Is my reasoning faulty? Or is someone objecting to my second example of a metaphysical stance that would motivate this type of calculation?
Perhaps people simply objected to the implied selfish motivations.
Perhaps! Though I certainly didn’t intend to imply that this was a selfish calculation—one could totally believe that the best altruistic strategy is to maximize the expected utility of being a person.
“assuming they don’t have any causal effects on other people”
Once you make such an unrealistic assumption, the conclusions won’t necessarily be realistic. (If you assume water has no viscosity, you can conclude that it exerts no drag on stuff moving in it.) In particular, it seems to me that as long as my basic physiological needs are met, my utility almost exclusively depends on interacting with other people, playing with toys invented by other people, reading stuff written by other people, listening to music by other people, etc.
When discussing such questions, we need to be careful to distinguish the following:
1. Is a world containing population B better than a world containing population A?
2. If a world with population A already existed, would it be moral to turn it into a world with population B?
3. If Omega offered me a choice between a world with population A and a world with population B, and I had to choose one of them, knowing that I’d live somewhere in the world, but not who I’d be, would I choose population B?
I am inclined to give different answers to these questions. Similarly for Parfit’s repugnant conclusion; the exact phrasing of the question could lead to different answers.
Another issue is background populations, which turn out to matter enormously for average utilitarianism. Suppose the world already contains a very large number of people with average utility 10 (off in distant galaxies, say), and call this population C. Then B+C has lower average utility than A+C, and the move from A+C to B+C gets a clear negative answer on all three questions, matching your intuition.
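A quick numeric check of that claim (the population sizes and utilities here are illustrative assumptions, not anything from the thread):

```python
# How a large positive background population C changes the A-vs-B comparison
# under average utilitarianism. All sizes and utilities are illustrative.

def average(utilities):
    return sum(utilities) / len(utilities)

n, m = 1_000, 9_901               # same illustrative values as before
pop_a = [-10]
pop_b = [-n] + [-9.9] * m
pop_c = [10] * 1_000_000          # large, distant background population at +10

print(average(pop_a + pop_c))     # ~9.99998
print(average(pop_b + pop_c))     # ~9.80, i.e. lower than A+C
```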
I suspect that this is the situation we’re actually in: a large, maybe infinite, population elsewhere that we can’t do anything about, and whose average utility is unknown. In that case, it is unclear whether average utilitarianism tells us to increase or decrease the Earth’s population, and we can’t make a judgement one way or another.