Another nice article. Gustav says most of the things that I wanted to say. A couple other things:
I think LELO with discounting is going to violate Pareto. Suppose that by default Amy is going to be born first with welfare 98 and then Bobby is going to be born with welfare 100. Suppose that you can do something which harms Amy (so her welfare is 97) and harms Bobby (so his welfare is 99). But also suppose that this harming switches the birth order: now Bobby is born first and Amy is born later. Given the right discount rate, LELO will advocate doing the harming, because it makes the better life happen earlier. Is that right?
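To make the arithmetic concrete, here's a minimal sketch (my own illustration, assuming exponential discounting with a per-birth-slot factor delta; none of the names come from the article):

```python
# Hypothetical illustration of the Amy/Bobby example under LELO with
# exponential discounting. `lelo_value` and `delta` are my own labels.

def lelo_value(welfares, delta):
    """Discounted sum of welfares, taken in birth order."""
    return sum(w * delta**t for t, w in enumerate(welfares))

default = [98, 100]  # Amy born first, then Bobby
harmed = [99, 97]    # both harmed, birth order switched: Bobby first

for delta in (0.2, 0.5):
    print(delta, lelo_value(default, delta), lelo_value(harmed, delta))
# delta = 0.2: default = 118.0, harmed = 118.4 -> LELO endorses the harm.
# delta = 0.5: default = 148.0, harmed = 147.5 -> LELO declines it.
```

Algebraically, 99 + 97δ > 98 + 100δ exactly when δ < 1/3, so any discount factor below 1/3 makes LELO pick the Pareto-dominated outcome.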
I think a minor reframing of Harsanyi’s veil of ignorance makes it more compelling as an argument for utilitarianism. Not only does doing the utilitarian thing maximise the decision-maker’s expected welfare behind the veil of ignorance; it maximises everyone’s expected welfare behind the veil of ignorance. So insofar as aggregativism departs from utilitarianism, it means doing what would be worse in expectation for everyone behind a veil of ignorance.
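Spelling the reframing out (my notation, not the article’s): behind the veil, each of the n people has an equal 1/n chance of occupying any position, so every person’s expectation is the same quantity,

```latex
\mathbb{E}[\text{welfare of person } j] \;=\; \frac{1}{n}\sum_{i=1}^{n} w_i ,
```

which is maximised exactly when the utilitarian total is. So maximising the total maximises the expectation for each person simultaneously.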
Yep, Pareto is violated, though the severity of the violation is limited by human psychology.
For example, in your Amy/Bobby scenario, would I prefer a lifetime of 98 utils then 100 utils to a lifetime of 99 utils then 97 utils? Maybe, I don’t know; I don’t really understand these abstract numbers very much, which is part of the motivation for replacing them entirely with personal outcomes. But I can certainly imagine taking some offer like this, violating Pareto. On the plus side, humans are not so imprudent as to accept extreme suffering just to reshuffle the experiences in their life.
Secondly, recall that the model of human behaviour is a free variable in the theory. So to ensure higher conformity to Pareto, we could…
1. Use the behaviour of someone with high delayed gratification.
2. Train the model (if it’s implemented as a neural network) to increase delayed gratification.
3. Remove the permutation-dependence using some idealisation procedure (see the sketch below).
But these techniques (increasingly, in the order 1 < 2 < 3) will result in increasingly “alien” optimisers. So there’s a trade-off between (1) avoiding human irrationalities and (2) robustness to ‘going off the rails’. (See Section 3.1.) I see realistic typical human behaviour at one extreme of the trade-off, and argmax at the other.
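For concreteness, here’s one hypothetical shape option 3 could take (a sketch under my own assumptions, not a procedure from the article): average the discounted evaluation over every ordering of the welfare stream, which makes it permutation-invariant and hence immune to the birth-order trick above.

```python
from itertools import permutations

def lelo_value(welfares, delta):
    """Discounted sum of welfares in the given order (permutation-dependent)."""
    return sum(w * delta**t for t, w in enumerate(welfares))

def idealised_value(welfares, delta):
    """One hypothetical idealisation: average lelo_value over all orderings.

    Every welfare then occupies every time slot equally often, so this
    collapses to mean(welfares) * sum_t delta**t -- i.e. (scaled) total
    utilitarianism.
    """
    perms = list(permutations(welfares))
    return sum(lelo_value(p, delta) for p in perms) / len(perms)

# The Amy/Bobby example again: the idealised evaluator ranks the unharmed
# stream above the harmed one at every discount factor.
print(idealised_value([98, 100], 0.2))  # 118.8
print(idealised_value([99, 97], 0.2))   # 117.6
```

That it collapses to total utilitarianism is the trade-off in miniature: full conformity to Pareto, but an optimiser that no longer resembles any human planner.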