(5) Omega uses ordinary conjuring, or heretofore-unknown powers, to put the million in the box after you make your decision. Solution: one-box for sure, no decision theory trickery needed. In practice, this is the conclusion we would come to if we encountered a being that appeared to behave like Omega, and it is therefore also the answer in any scenario where we don’t know the true implementation of Omega (i.e. any real scenario).
If the boxes are transparent, resolve to one-box iff the big box is empty.
I outlined a few more possibilities on Overcoming Bias last year:
There are many ways Omega could be doing the prediction/placement, and it may well matter exactly how the problem is set up. For example, you might be deterministic and he is precalculating your choice (much as we might be able to do with an insect or computer program), or he might be using a quantum suicide method, (quantum) randomizing whether the million goes in and then destroying the world iff you pick the wrong option (this will lead to us observing him being correct 100/100 times, assuming a many-worlds interpretation of QM). Or he could have just got lucky with the last 100 people he tried it on.
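To make the "precalculating" option concrete, here is a minimal sketch, assuming the agent is simply a deterministic program that Omega can run ahead of time; the function names and payoffs are illustrative stand-ins, not part of the original problem statement.

```python
# Toy model of a "precalculating" Omega: if the agent is a deterministic
# program, Omega can predict its choice simply by running a copy of it first.
# Function names and payoffs are illustrative assumptions.

def one_boxer():
    return "one"

def two_boxer():
    return "two"

def omega_plays(agent):
    prediction = agent()                        # Omega precalculates by simulating the agent
    big_box = 1_000_000 if prediction == "one" else 0
    choice = agent()                            # the agent then actually chooses
    return big_box if choice == "one" else big_box + 1_000

print(omega_plays(one_boxer))   # 1000000
print(omega_plays(two_boxer))   # 1000
```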
If it is the deterministic option, then what do the counterfactuals about choosing the other box even mean? My approach is to say that ‘You could choose X’ means that if you had desired to choose X, then you would have. This is a standard way of understanding ‘could’ in a deterministic universe. Then the answer depends on how we suppose the world to be different to give you counterfactual desires. If we do it with a miracle near the moment of choice (history is the same, but then your desires change non-physically), then you ought to two-box, as Omega can’t have predicted this. If we do it with an earlier miracle, or with a change to the initial conditions of the universe (the Tannsjo interpretation of counterfactuals), then you ought to one-box, as Omega would have predicted your choice. Thus, if we are understanding Omega as extrapolating your deterministic thinking, then the answer will depend on how we understand the counterfactuals. One-boxers and Two-boxers would be people who interpret the natural counterfactual in the example in different (and equally valid) ways.
If we understand it as Omega using a quantum suicide method, then the objectively right choice depends on his initial probabilities of putting the million in the box. If he does it with a 50% chance, then take just one box. There is a 50% chance the world will end with either choice, but this way, in the case where it doesn’t, you will have a million rather than a thousand. If, however, he uses a 99% chance of putting nothing in the box, then one-boxing has a 99% chance of destroying the world, which dominates the value of the extra money, so instead two-box, take the thousand and live.
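A quick expected-value sketch of that reasoning, assuming (purely as an illustrative stand-in) some very large negative utility for destroying the world; here p is Omega's probability of putting the million in, and the world is destroyed iff your choice mismatches his placement.

```python
# Expected utility under the quantum-suicide Omega, for the two priors above.
# WORLD_DESTROYED is an assumed, illustrative utility for destroying the world.

WORLD_DESTROYED = -1e12

def expected_utility(choice, p):
    if choice == "one":
        # survive (with $1,000,000) iff the million was placed: probability p
        return p * 1_000_000 + (1 - p) * WORLD_DESTROYED
    else:
        # survive (with $1,000) iff the big box was empty: probability 1 - p
        return (1 - p) * 1_000 + p * WORLD_DESTROYED

for p in (0.5, 0.01):
    print(p, expected_utility("one", p), expected_utility("two", p))
# At p = 0.5 the destruction risk is the same either way, so one-boxing wins;
# at p = 0.01 one-boxing destroys the world 99% of the time, so two-boxing wins.
```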
If he just got lucky a hundred times, then you are best off two-boxing.
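(As a side note on how much weight to give the "got lucky" hypothesis: assuming lucky guessing means a 50% hit rate, a 100/100 record would be astronomically improbable, so this explanation needs a correspondingly enormous prior. A one-line check:)

```python
# If "got lucky" means guessing with a 50% hit rate (an assumption), the chance
# of a 100/100 record is:
print(0.5 ** 100)   # ~7.9e-31
```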
If he time travels, then it depends on the nature of time-travel...
Thus the answer depends on key details not told to us at the outset. Some people accuse all philosophical examples (like the trolley problems) of not giving enough information, but in those cases it is fairly obvious how we are expected to fill in the details. This is not true here. I don’t think the Newcomb problem has a single correct answer. The value of it is to show us the different possibilities that could lead to the situation as specified and to see how they give different answers, hopefully illuminating the topics of free will, counterfactuals, and prediction.
There’s a (6) which you might consider a variant of (5): having made his best guess on whether you’re going to one-box or two-box, Omega enforces that guess with orbital mind control lasers.
Good! Now we have some terminology for future generations:
1) Temporal Omega
2) Simulator Omega
3) Terminating Omega
4) Singleton Omega
5) Cheating Omega
Great point about the prior, thanks.