The more I think about this, the more I suspect that the problem lies in the distinction between quantum and logical coin-flips.
Suppose this experiment is carried out with a quantum coin-flip. Then, under many-worlds, both outcomes are realized in different branches. There are 40 future selves (2 red and 18 green in one world, 18 red and 2 green in the other), and your duty is clear:
Don’t take the bet.
So why does Eliezer insist on using a logical coin-flip? Because, I suspect, it takes many-worlds out of the picture: logical coin-flips don't create possible worlds the way quantum coin-flips do.
But what is a logical coin-flip, anyway?
Using the example given at the top of this post, an agent that was not only rational but clever would sit down and calculate the 256th binary digit of pi before answering (see the sketch below). Picking a harder logical coin-flip just raises the computational bar; a sufficiently intelligent agent could still solve it, even if you can't.
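To make that concrete, here is a minimal sketch of the calculation such an agent could run. The post doesn't prescribe a method; using the Bailey-Borwein-Plouffe digit-extraction formula is my own choice for illustration, and the function name is mine. Since each hex digit of pi encodes four binary digits, the 256th binary digit is the low bit of the 64th hex digit:

```python
def pi_hex_digit(n):
    """Return the n-th hexadecimal digit of pi after the point (1-indexed),
    via the Bailey-Borwein-Plouffe digit-extraction formula."""
    def frac_series(j):
        # Fractional part of sum over k of 16^(n-1-k) / (8k + j).
        # For k < n the exponent is a non-negative integer, so modular
        # exponentiation lets us keep only the fractional part.
        s = 0.0
        for k in range(n):
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        # For k >= n the terms shrink geometrically; sum until negligible.
        k = n
        while True:
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s

    x = (4 * frac_series(1) - 2 * frac_series(4)
         - frac_series(5) - frac_series(6)) % 1.0
    return int(x * 16)

# Hex digit m covers binary digits 4m-3 through 4m, so the 256th
# binary digit of pi is the least significant bit of hex digit 64.
print(pi_hex_digit(64) & 1)
```

Whatever this prints, the answer was fixed before the experiment began; that is exactly what makes the coin-flip "logical" rather than quantum.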
So there are two kinds of logical coin-flip: the sort that is indistinguishable from a quantum coin-flip even in principle, in which case it ought to cause the same sort of branching under many-worlds, and the sort that is solvable, but only by someone smarter than you.
If you’re not smart enough to solve the logical coin-flip, you may as well treat it as a quantum coin-flip, because it’s already been established that you can’t possibly do better. That doesn’t mean your decision algorithm is flawed; just that if you were more powerful, it would be more powerful too.
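One way to picture that last point, as a toy sketch of my own rather than anything from the post (the function names are hypothetical): the same decision procedure, handed a larger computing budget, resolves the coin instead of guessing.

```python
def credence_in_heads(try_compute):
    """Credence that some fixed binary fact (the logical coin) is 1.

    try_compute() returns 0 or 1 if the agent can settle the fact
    within its budget, or None if it runs out of resources.
    """
    result = try_compute()
    if result is not None:
        return float(result)  # settled: credence collapses to 0.0 or 1.0
    return 0.5  # unsolvable for this agent: treat it like a quantum coin

# A weak agent gives up and answers 0.5; a stronger agent running the
# *same* procedure with a larger budget would return 0.0 or 1.0 instead.
print(credence_in_heads(lambda: None))  # -> 0.5
```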