I don’t understand your comment. Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more than any of them do. If you don’t think that such a world would be better, then you must agree that average utilitarianism is false.
Here’s another, even more obviously decisive, counterexample to average utilitarianism. Consider a world A in which people experience nothing but agonizing pain. Consider next a different world B which contains all the people in A, plus arbitrarily many more people all experiencing pain only slightly less intense. Since the average pain in B is less than the average pain in A, average utilitarianism implies that B is better than A. This is clearly absurd, since B differs from A only in containing a surplus of agony.
I do think that the former is better (to the extent that I can trust my intuitions in a case so different from those in their training set).
Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.
“Separability” of value just means being able to evaluate something without having to look at anything else. I think that whether or not it’s a good thing to bring a new person into existence depends only on facts about that person (assuming they don’t have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn’t be relevant what happened in the distant past. But average utilitarianism makes it relevant: because long-dead people affect the average wellbeing, and therefore affect whether it’s good or bad to bring that person into existence.
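To make that concrete, here is a minimal sketch with purely illustrative numbers of my own choosing (none of them come from the discussion): under average utilitarianism, whether creating one and the same new person counts as an improvement depends on how well long-dead people happened to fare.

```python
# Minimal sketch, illustrative numbers only: under average utilitarianism,
# the verdict on creating the very same new person flips depending on the
# wellbeing of people who are long dead.

def average(utilities):
    return sum(utilities) / len(utilities)

new_person = 5                 # assumed wellbeing of the prospective new person

happy_past = [9, 10, 11]       # assumed: ancestors who lived very good lives
unhappy_past = [1, 2, 3]       # assumed: ancestors who lived poor lives

for label, past in (("happy past", happy_past), ("unhappy past", unhappy_past)):
    before = average(past)
    after = average(past + [new_person])
    verdict = "good" if after > before else "bad"
    print(f"{label}: average {before:.2f} -> {after:.2f}, so creating them is {verdict}")

# happy past: 10.00 -> 8.75 ("bad"); unhappy past: 2.00 -> 2.75 ("good").
# A separable view such as total utilitarianism would give the same verdict in
# both cases, since only the new person's own wellbeing would enter into it.
```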
But, let’s return to the intuitive case above, and make it a little stronger.
Now suppose:
Population A: 1 person suffering a lot (utility −10)
Population B: That same person, suffering an arbitrarily large amount (utility −n, for any arbitrarily large n), and a very large number, m, of people suffering −9.9.
Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. That is, average utilitarianism is willing to add horrendous suffering to someone’s already horrific life in order to bring into existence many other people with horrific lives.
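The entailment is easy to check numerically. Here is a rough sketch, with sample values of n picked arbitrarily by me: since avg(B) = (−n − 9.9m)/(m + 1), it exceeds −10 exactly when m > 10(n − 10).

```python
# Quick numerical check of the entailment, using arbitrarily chosen values of n.
# Population A is one person at -10; population B is that person at -n plus
# m people at -9.9, so avg(B) = (-n - 9.9*m) / (m + 1), which exceeds -10
# exactly when m > 10*(n - 10).

def avg_B(n, m):
    return (-n + m * -9.9) / (m + 1)

AVG_A = -10.0

for n in (100, 10_000, 1_000_000):
    m = 10 * (n - 10) + 1          # just past the threshold derived above
    assert avg_B(n, m) > AVG_A
    print(f"n = {n:>9}: m = {m:>9} people at -9.9 make avg(B) = {avg_B(n, m):.6f} > -10")
```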
Do you still get the intuition in favour of average here?
Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human: in population A you will get utility −10, while in population B you get an expected (1/(m+1))(−n) + (m/(m+1))(−9.9). These intuitions could correspond to a straightforward “maximize the expected utility of ‘being someone in this world’”, or to something like “suppose all consciousness is experienced by a single entity from multiple perspectives, completing all lives and then cycling back again from the beginning; maximize this being’s utility”. Such perspectives would give the “non-intuitive” result in these sorts of thought experiments.
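Spelled out with sample values of n and m that I have picked purely for illustration, the calculation looks like this. The expected utility of being a uniformly random inhabitant of B is just B’s average utility, which is why this “prospects” perspective reproduces the average-utilitarian verdict.

```python
# Sketch of the "prospect of a potential person" calculation, with n and m
# chosen arbitrarily for illustration (not values from the thread).

import random

n, m = 1_000.0, 100_000                 # assumed values for the thought experiment
pop_B = [-n] + [-9.9] * m               # the original person plus m others at -9.9

prospect_A = -10.0
prospect_B = (1 / (m + 1)) * -n + (m / (m + 1)) * -9.9   # = average utility of B

monte_carlo = sum(random.choice(pop_B) for _ in range(200_000)) / 200_000

print(f"prospect in A: {prospect_A}")
print(f"prospect in B: {prospect_B:.4f}  (Monte Carlo check: {monte_carlo:.2f})")
# With these numbers the prospect in B is about -9.91, i.e. "better" than A,
# even though B contains one person who is vastly worse off.
```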
Hm, a downvote. Is my reasoning faulty? Or is someone objecting to my second example of a metaphysical stance that would motivate this type of calculation?
Perhaps people simply objected to the implied selfish motivations.
Perhaps! Though I certainly didn’t intend to imply that this was a selfish calculation—one could totally believe that the best altruistic strategy is to maximize the expected utility of being a person.
Once you make an assumption as unrealistic as that a new person “doesn’t have any causal effects on other people”, the conclusions won’t necessarily be realistic. (If you assume water has no viscosity, you can conclude that it exerts no drag on stuff moving in it.) In particular, ISTM that as long as my basic physiological needs are met, my utility almost exclusively depends on interacting with other people, playing with toys invented by other people, reading stuff written by other people, listening to music by other people, etc.
When discussing such questions, we need to be careful to distinguish the following:
1. Is a world containing population B better than a world containing population A?
2. If a world with population A already existed, would it be moral to turn it into a world with population B?
3. If Omega offered me a choice between a world with population A and a world with population B, and I had to choose one of them, knowing that I’d live somewhere in the world, but not who I’d be, would I choose population B?
I am inclined to give different answers to these questions. Similarly for Parfit’s repugnant conclusion; the exact phrasing of the question could lead to different answers.
Another issue is background populations, which turn out to matter enormously for average utilitarianism. Suppose the world already contains a very large number of people with average utility 10 (off in distant galaxies, say), and call this population C. Then the combination B+C has lower average utility than A+C, and gets a clear negative answer on all three questions, matching your intuition.
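Here is a quick sketch of that reversal; the population sizes and utilities are made-up assumptions, chosen only to make the effect visible.

```python
# Sketch of the background-population reversal, with made-up sizes and
# utilities. C is a large, far-away population at average utility 10.

def average(utilities):
    return sum(utilities) / len(utilities)

A = [-10.0]                          # the single person at -10
B = [-1_000.0] + [-9.9] * 10_000     # same person far worse off, plus many at -9.9
C = [10.0] * 1_000_000               # assumed distant background population

print(f"no background:   avg(A)   = {average(A):.5f}   avg(B)   = {average(B):.5f}")
print(f"with background: avg(A+C) = {average(A + C):.5f}   avg(B+C) = {average(B + C):.5f}")
# Without C, B has the (slightly) higher average, so average utilitarianism
# prefers B; adding C reverses the ranking, because B+C drags the excellent
# background average down much further than A+C does.
```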
I suspect that this is the situation we’re actually in: a large, maybe infinite, population elsewhere that we can’t do anything about, and whose average utility is unknown. In that case, it is unclear whether average utilitarianism tells us to increase or decrease the Earth’s population, and we can’t make a judgement one way or another.
While I am not an average utilitarian (I think), a world containing only one person suffering horribly does seem kinda worse than the world in which many people suffer.
Both worlds contain people “suffering horribly”.
One world contains people suffering horribly. The other contains a person suffering horribly. And no-one else.
So, the difference is that in one world there are many people, rather than one person, suffering horribly. How on Earth can this difference make the former world better than the latter?!
Because the latter world doesn’t contain anyone else. There’s only one human left, and they’re “suffering horribly”.
Suppose I publicly endorse a moral theory which implies that the more headaches someone has, the better the world becomes. Suppose someone asks me to explain my rationale for claiming that a world that contains more headaches is better. Suppose I reply by saying, “Because in this world, more people suffer headaches.”
What would you conclude about my sanity?
Most people value humanity’s continued existence.