Yep, Pareto is violated, though how severely it’s violated is limited by human psychology.
For example, in your Alice/Bob scenario, would I prefer a lifetime of 98 utils then 100 utils over a lifetime of 99 utils then 97 utils? Maybe? Idk, I don’t really understand these abstract numbers very well, which is part of the motivation for replacing them entirely with personal outcomes. But I can certainly imagine I’d take some offer like this, violating Pareto. On the plus side, humans are not so imprudent as to accept extreme suffering just to reshuffle the order of experiences in their life.
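To make the Pareto point concrete, here’s a minimal sketch (the evaluator and the particular numbers are my own illustration, not anything from the Alice/Bob setup): an agent who dislikes declining sequences of experiences can strictly prefer a stream of utils that is Pareto-dominated.

```python
def pareto_dominates(a, b):
    """Stream a Pareto-dominates b: at least as good at every step, strictly better at some step."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def order_sensitive_value(stream, decline_penalty=2.0):
    """A hypothetical human-like evaluator that penalises declines between consecutive periods."""
    declines = sum(max(0, earlier - later) for earlier, later in zip(stream, stream[1:]))
    return sum(stream) - decline_penalty * declines

improving = [98, 100]   # an improving lifetime
declining = [105, 100]  # Pareto-dominates `improving` step by step, but declines over time

assert pareto_dominates(declining, improving)
# The order-sensitive evaluator still prefers the dominated, improving stream:
assert order_sensitive_value(improving) > order_sensitive_value(declining)
```

The `decline_penalty` knob is doing the psychological work here: set it to zero and the evaluator reduces to total utils, which respects Pareto.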
Secondly, recall that the model of human behaviour is a free variable in the theory. So, to ensure higher conformity to Pareto, we could:
1. Use the behaviour of someone with high delayed gratification.
2. Train the model (if it’s implemented as a neural network) to increase delayed gratification.
3. Remove the permutation-dependence using some idealisation procedure.
But these techniques will result in increasingly “alien” optimisers (1 < 2 < 3). So there’s a trade-off between (1) avoiding human irrationalities and (2) robustness to ‘going off the rails’. (See Section 3.1.) I see realistic typical human behaviour at one extreme of the trade-off, and argmax at the other.