The more well-specified version of Transparent Newcomb says that Omega only puts $1M in the box if he predicts you will one-box regardless of what you see.
In that version, there’s no paradox: anyone who goes in with the mentality you describe will end up seeing $1000 and $0. Their predictable decision to “change my choice based on what I see” is what causes this, and it fulfills Omega’s prediction.
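To make that filling rule concrete, here’s a minimal Python sketch of the well-specified version (the policy names and payoff bookkeeping are my own hypothetical framing; a policy maps what the player sees in the big transparent box to a choice):

```python
# Well-specified Transparent Newcomb: Omega fills the big box only if
# the player's policy one-boxes regardless of what it would see there.
# (A hypothetical sketch; names and payoffs are illustrative.)

ONE_BOX, TWO_BOX = "one-box", "two-box"
M, K = 1_000_000, 1_000  # big-box and small-box amounts

def omega_fills(policy):
    # Omega simulates the policy against both possible observations.
    return all(policy(seen) == ONE_BOX for seen in (0, M))

def play(policy):
    big = M if omega_fills(policy) else 0
    choice = policy(big)  # the player chooses after seeing the big box
    payoff = big + (K if choice == TWO_BOX else 0)
    return big, choice, payoff

def constant_one_boxer(seen):
    return ONE_BOX

def reactive(seen):
    # "Change my choice based on what I see": grab both boxes if the
    # million is visible, otherwise one-box.
    return TWO_BOX if seen == M else ONE_BOX

print(play(constant_one_boxer))  # (1000000, 'one-box', 1000000)
print(play(reactive))            # (0, 'one-box', 0)
```

The reactive player is disqualified by Omega’s simulation of the full-box branch, so they walk in and face $1000 and an empty big box, exactly as described.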
That’s not Transparent Newcomb; that’s Transparent Newcomb modified to take out the point I was trying to use it to illustrate.
I’m not sure there remains a point to illustrate: if Omega doesn’t predict a player who alters their choice based on what they see, then it’s not a very predictive Omega at all.
It’s likewise not a very predictive Omega if it doesn’t predict the possibility of a player flipping a quantum coin to determine the number of boxes to take. That problem can arise for the non-transparent version as well. (The variation generally used there is, again, that if the player chooses to use quantum randomness, Omega leaves the opaque box empty. And possibly also kills a puppy :-)
Although some people have mentioned flipping a coin or its equivalent, I didn’t. It’s too easy to say that we are only postulating that Omega can predict your algorithm, and that of course he couldn’t predict an external source of randomness.
The point of the transparent version is to illustrate that even without an external source of randomness, you can run into a paradox: Omega is trying to predict you, but you may be trying to predict Omega as well, in which case what you will do may be undecidable for Omega. He can’t predict it even in principle, no matter how good he is. Making the boxes transparent is just a way to bypass the inevitable objection of “how can you, a mere human, hope to predict Omega?” by creating a situation where predicting Omega correctly is guaranteed: you can see what he has already done.
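To see the self-reference directly, here’s a minimal sketch (same hypothetical framing as above) of the simple transparent version, where Omega fills the big box iff he predicts one-boxing. Since the box contents reveal the prediction, a contrarian policy can always falsify it:

```python
# Simple Transparent Newcomb: the big box's contents reveal Omega's
# prediction, so a contrarian player can always do the opposite.
# (A hypothetical sketch; names are illustrative.)

ONE_BOX, TWO_BOX = "one-box", "two-box"
M = 1_000_000

def contrarian(seen):
    # A full box means Omega predicted one-boxing, so two-box,
    # and vice versa.
    return TWO_BOX if seen == M else ONE_BOX

# Neither prediction Omega could make is consistent with the outcome:
for prediction in (ONE_BOX, TWO_BOX):
    big = M if prediction == ONE_BOX else 0
    actual = contrarian(big)
    print(f"predicted {prediction}, got {actual}: "
          f"{'consistent' if prediction == actual else 'inconsistent'}")
```

There is no fixed point: whichever prediction Omega commits to, the contrarian’s observed behavior contradicts it, which is the undecidability the transparent version is meant to surface.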