Nice article. I think it’s a mistake for Harsanyi to argue for average utilitarianism. The view has some pretty counterintuitive implications:
Suppose we have a world in which one person is living a terrible life, represented by a welfare level of −100. Average utilitarianism implies that we can make that world better by making the person’s life even more terrible (−101) and adding a load of people with slightly less terrible lives (−99). (A quick check at the end of this comment bears this out.)
Suppose I’m considering having a child. Average utilitarianism implies that I have to do research in Egyptology to figure out whether having a child is permissible.[1] That seems counterintuitive.
On a natural way of extending average utilitarianism to risky prospects, the view can oppose the interests of all affected individuals. See Gustafsson and Spears.
[1] If the ancient Egyptians were very happy, my child would bring down the average, and so having the child would be wrong. If the ancient Egyptians were unhappy, my child would bring up the average, and so having the child would be right.
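To make the first implication concrete, here’s a quick check in Python. The population size of a million is just an arbitrary stand-in for “a load of people”; only the welfare levels come from the example above.

```python
# Quick check: making the existing person worse off and adding badly-off people
# can still raise the average, so long as the newcomers sit above the old average.
def average(welfares):
    return sum(welfares) / len(welfares)

original_world = [-100]                     # one person at -100
new_world = [-101] + [-99] * 1_000_000      # that person at -101, plus many lives at -99

print(average(original_world))  # -100.0
print(average(new_world))       # ~ -99.0, so average utilitarianism prefers the new world
```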
I do prefer total utilitarianism to average utilitarianism,[1] but one thing that pulls me to average utilitarianism is the following case.
Let’s suppose Alice can choose either (A) creating 1 copy at 10 utils or (B) creating 2 copies at 9 utils. Then average utilitarianism endorses (A), and total utilitarianism endorses (B). Now, if Alice knows she’s been created by a similar mechanism, and her option is correlated with the choice of her ancestor, and she hasn’t yet learned her own welfare, then EDT endorses picking (A): choosing (A) is evidence that her creator also chose (A), and hence evidence that her own welfare is 10 rather than 9. So that matches average utilitarianism.[2]
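Here’s a minimal sketch of that EDT reasoning, assuming (as an idealisation) that Alice’s choice is perfectly correlated with her creator’s:

```python
# Minimal sketch of the EDT reasoning, under the idealising assumption that
# Alice's choice perfectly matches her creator's choice.
def expected_own_welfare(alices_choice):
    # Conditioning on her own choice, Alice infers her creator made the same choice,
    # which fixes the pool of copies she could be.
    copies = {"A": [10], "B": [9, 9]}[alices_choice]
    # Not yet knowing her own welfare, she treats herself as a random one of those copies.
    return sum(copies) / len(copies)

print(expected_own_welfare("A"))  # 10.0
print(expected_own_welfare("B"))  # 9.0 -> EDT favours (A), matching average utilitarianism
```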
Basically, you’d be pleased to hear that all your ancestors were average utility maximisers, rather than total utility maximisers, once you “update on your own existence” (whatever that means). But also, I’m pretty confused by everything in this anthropics/decision theory/population ethics area. Like, the Egyptology thing seems pretty counterintuitive, but acausal decision theories and anthropic considerations imply all kinds of weird nonlocal effects, so idk if this is excessively fishy.
[1] I think aggregative principles are generally better than utilitarian ones. I’m a fan of LELO in particular, which is roughly somewhere between total and average utilitarianism, leaning mostly to the former.
[2] Maybe this also requires SSA??? Not sure.
Yeah I think correlations and EDT can make things confusing. But note that average utilitarianism can endorse (B) given certain background populations. For example, if the background population is 10 people each at 1 util, then (B) would increase the average more than (A).
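For concreteness, here’s that arithmetic, just plugging in the numbers above:

```python
# Background population of 10 people at 1 util each; compare adding (A) vs (B).
def average(welfares):
    return sum(welfares) / len(welfares)

background = [1] * 10

option_a = background + [10]      # (A): create 1 person at 10 utils
option_b = background + [9, 9]    # (B): create 2 people at 9 utils

print(average(option_a))  # 20/11 ≈ 1.82
print(average(option_b))  # 28/12 ≈ 2.33 -> (B) gives the higher average here
```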