There’s an easier solution to the posed problem if you assume MWI. (Has anyone else suggested this solution? It seems too obvious to me.)
Suppose you are offered & accept a deal where 99 out of 100 yous die, and the survivor gets 1000 lifetimes' worth of computational resources (call each lifetime one unit). All the survivor has to do is agree to simulate the 99 losers (and, obviously, keep running himself), which costs 100 of those 1000 units, yielding a net profit of 900 units.
(Substitute units as necessary for each ever more extreme deal Omega offers.)
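A minimal sketch of the accounting, assuming one unit is the cost of simulating one lifetime; the copy counts and payouts beyond the first deal are purely illustrative stand-ins for Omega's escalating offers:

```python
def net_profit(copies: int, payout: int) -> int:
    """Units left to the surviving copy after it pays to simulate
    all `copies` versions of itself (the 99 losers plus itself),
    given `payout` units of computational resources."""
    return payout - copies

# The deal described above: 100 copies, 1000 units awarded.
assert net_profit(copies=100, payout=1000) == 900

# Each more extreme deal remains a net gain so long as the payout
# outpaces the simulation bill (numbers are illustrative):
for copies, payout in [(100, 1_000), (10_000, 100_000), (10**6, 10**7)]:
    print(f"{copies} copies, {payout} units -> net {net_profit(copies, payout)}")
```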
No version of yourself loses, since each lives on (in simulation), and one gains enormously. So isn't accepting Omega's offers, as long as each one is a net profit as described, a Pareto improvement? Knowing this is true at each step, why would one then act like Eliezer and pay a penny to welsh on the entire thing?