Here’s another variation: Newcomb’s problem is as usually presented, including Omega being able to predict which boxes you would take and putting money in the boxes accordingly, except that in this case the boxes are transparent. Furthermore, you think Omega is a little snooty and would like to take him down a peg. You value this at more than $1000. What do you do?
Obviously, if you see $1000 and $1M, you pick both boxes, because that is good from both the monetary and the anti-Omega perspective. If you see $1000 and $0, the anti-Omega perspective wins out and you pick one box.
Unfortunately, Omega always predicts correctly. So if you picked both boxes, the boxes contain $1000 and $0, while if you picked one box, they contain $1000 and $1M. But that contradicts what I just said you would do....
(In fact the regular version of the problem is subject to this too. Execute the strategy “predict what Omega did to the boxes and make the opposite choice”. Having transparent boxes just gives you 100% accuracy in your prediction.)
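(To make the circularity concrete, here is a minimal sketch in Python. The encoding of the contrarian strategy and of Omega’s filling rule is mine, not part of the problem statement.)

```python
def contrarian(big_box_full: bool) -> int:
    """Return the number of boxes taken after seeing the big box."""
    return 2 if big_box_full else 1  # sees $1M -> take both; sees $0 -> take one

# Omega fills the big box iff it predicts one-boxing, so each possible filling
# presupposes a particular prediction. Check whether either filling is
# self-consistent against the contrarian's actual behaviour:
for big_box_full in (True, False):
    predicted_choice = 1 if big_box_full else 2
    actual_choice = contrarian(big_box_full)
    print(big_box_full, predicted_choice == actual_choice)
# Both lines print False: no way of filling the boxes makes Omega's
# prediction come out right, which is the contradiction described above.
```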
This version of Transparent Newcomb is ill-defined, because Omega’s decision process is not well-specified. If what you do depends on what money is in the boxes, there’s no unique correct prediction. Normally, Transparent Newcomb involves Omega predicting what you would do with the large box set to either empty or full (the “two arms” of Transparent Newcomb).
Also, I don’t think “predict what Omega did to the boxes and make the opposite choice” is much of a problem either. You can’t simultaneously be perfect predictors of each other, because that would let you predict yourself, etc etc
Omega’s decision process is as well-specified as it is in the non-transparent version: Omega predicts your choice of boxes and uses the result of that prediction to decide what to put in the boxes.
Yes, of course, but you can’t be an imperfect predictor either, unless you’re imperfect in a very specific way. Imagine that there’s a 25% chance you correctly predict what Omega does: whenever your guess is right, your contrarian choice makes Omega’s prediction of you wrong, so Omega still can’t be a perfect predictor. The only real difference between the transparent and non-transparent versions (if you still like taking Omega down a peg) is that the transparent version guarantees that you can correctly “predict” what Omega did.
A flipped coin has a 50% chance to correctly predict what Omega does, if Omega is allowed only two courses of action.
If your choice of boxes depends on what you observe, he needs to decide whether you see an empty box or a full box before he can predict what you’ll do. The non-transparent version does not have this problem.
But we can still break it in similar ways. Pre-commit to flipping a coin (or consulting some other source of randomness) to make your choice, and Omega can’t be a perfect predictor, which breaks the specification of the problem.
These are all trivial objections. In the same manner you can “break the problem” by saying “well, what if the player chooses to burn both boxes?” “What if the player walks away?” “What if the player recites Vogon poetry and then shoots himself in the head without taking any of the boxes?”.
Player walks into the room, recites Vogon poetry, and then shoots themselves in the head.
We then open Box A. Inside we see a note that says “I predict that the player will walk into the room, recite Vogon poetry, and then shoot themselves in the head without taking any of the boxes”.
These objections don’t really illuminate anything about the problem. There’s nothing inconsistent about Omega predicting that you’re going to do any of these things and filling the box in advance according to that prediction. That the original phrasing of the problem doesn’t list all of the various possibilities is, again, just a silly, meaningless objection.
Your objections are of a different character. The scenarios you list (burning the boxes, walking away, the Vogon poetry routine) all involve not picking boxes at all. The point of the coin flip is that there are box-picking algorithms that are unpredictable: there are methods of picking that make it impossible for Omega to have perfect accuracy. Whether or not Newcomb’s problem is coherent depends on your model of how people make choices, and on how noisy that process is.
The premise of the thought experiment is that Omega has come to you and said, “I have two boxes here, and know whether you are going to open one box or two boxes, and thus have filled the boxes accordingly”.
If Omega knows enough to predict whether you’ll one-box or two-box, then Omega knows enough to predict whether you’re going to flip a coin, do a dance, kill yourself, or otherwise break that premise. Since the frame story is that the premise holds, then clearly Omega has predicted that you will either one-box or two-box.
Therefore, this Omega doesn’t play this game with people who do something silly instead of one-boxing or two-boxing. Maybe it just ignores those people. Maybe it plays another game. But the point is, if we have the narrative power to stipulate an Omega that plays the “one box or two” game accurately, then we have the narrative power to stipulate an Omega that doesn’t bother playing it with people who are going to break the premise of the thought experiment.
In programmer-speak, we would say that Omega’s behavior is undefined in these circumstances, and it is legal for Omega to make demons fly out of your nose in response to such cleverness.
Flipping a coin IS one-boxing or two-boxing! It’s just not doing it PREDICTABLY.
ಠ_ಠ
EDIT: Okay, I’ll engage.
Either Omega has perfect predictive power over minds AND coins, or it doesn’t.
If it has perfect predictive power over minds AND coins, then it knows which way the flip will go, and what you’re really saying is “give me a 50/50 gamble with an expected payoff of $500,500” instead of $1,000,000 or $1,000 (a quick check of that number is sketched below), in which case you are not a rational actor and Newcomb’s Omega has no reason to want to play the game with you.
If it only has predictive power over minds, then neither it nor you know which way the flip will go, and the premise is broken. Since you accepted the premise when you said “if Omega shows up, I would...”, then you must not be the sort of person who would pre-commit to an unpredictable coinflip, and you’re just trying to signal cleverness by breaking the thought experiment on a bogus technicality.
Please don’t do that.
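(For reference, the $500,500 figure above is just the expected value of a fair coin flip under the assumption that Omega predicts the coin and fills the box to match it:)

```python
# If Omega also predicts the coin: heads -> it fills the box and you one-box
# ($1,000,000); tails -> it leaves the box empty and you two-box ($1,000).
expected_payoff = 0.5 * 1_000_000 + 0.5 * 1_000
print(expected_payoff)  # 500500.0
```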
It’s not breaking the thought experiment on a “bogus technicality”; it’s pointing out that the thought experiment is only coherent if we make some pretty significant assumptions about how people make decisions. The noisier we believe human decision-making is, the less perfect Omega can be.
The paradox still raises the same point for decision algorithms, but the coin flip underscores that the problem can be ill-defined for decision algorithms that incorporate noisy inputs.
The better-specified version of Transparent Newcomb says that Omega only puts the $1M in the box if he predicts you will one-box regardless of what you see.
In that version, there’s no paradox: anyone who goes in with the mentality you describe will end up seeing $1000 and $0. Their predictable decision of “change my choice based on what I see” is what will have caused this, and it fulfills Omega’s prediction.
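(A minimal sketch of how that rule dissolves the paradox, assuming we model a player as a policy from “what I see” to “how many boxes I take”; the encoding is mine:)

```python
def omega_fills(policy) -> bool:
    # Fill the big box only if the policy one-boxes on BOTH possible observations.
    return policy(True) == 1 and policy(False) == 1

def contrarian(big_box_full: bool) -> int:
    return 2 if big_box_full else 1   # change choice based on what is seen

def one_boxer(big_box_full: bool) -> int:
    return 1                          # one-box no matter what

print(omega_fills(contrarian))  # False: the contrarian walks in to $1000 and $0
print(omega_fills(one_boxer))   # True:  the one-boxer walks in to $1000 and $1M
# Either way, what the player sees is consistent with their own policy,
# so the contradiction from the parent comment never arises.
```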
That’s not transparent Newcomb, that’s transparent Newcomb modified to take out the point I was trying to use it to illustrate.
I’m not sure there remains a point to illustrate: if Omega doesn’t predict a player who alters their choice based on what they see, then it’s not a very predictive Omega at all.
It’s likewise not a very predictive Omega if it doesn’t predict the possibility of a player flipping a quantum coin to determine the number of boxes to take. That problem works for the non-transparent version too. (The variation generally used then is that if the player chooses to use quantum randomness, Omega leaves the opaque box empty. And possibly also kills a puppy. :-)
Although some people are mentioning flipping a coin or its equivalent, I didn’t. It’s too easy to say that we are only postulating that Omega can predict your algorithm and that of course he couldn’t predict an external source of randomness.
The point of the transparent version is to illustrate that even without an external source of randomness, you can run into a paradox: Omega is trying to predict you, but you may be trying to predict Omega as well, in which case predicting what you do may be undecidable for Omega. He can’t even in principle predict what you do, no matter how good he is. Making the boxes transparent is just a way to bypass the inevitable objection of “how can you, a mere human, hope to predict Omega?” by creating a situation where correctly “predicting” Omega is 100% guaranteed.
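(Here is a toy illustration of that undecidability, with the caveat that real mutual prediction need not literally be mutual simulation; the functions are my own invention:)

```python
def omega_fills_box() -> bool:
    return player_choice() == 1           # fill iff the player will one-box

def player_choice() -> int:
    return 2 if omega_fills_box() else 1  # do the opposite of Omega's move

try:
    player_choice()
except RecursionError:
    print("no answer: each side's choice depends on the other's")
# Neither computation bottoms out, which is the sense in which Omega
# "can't even in principle" predict this player.
```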
Thank you for simply illustrating how easily the assumption of accurate predictions can contradict the assumption that we can choose our decision algorithm.
I would steelman the OP by saying that you should precommit to the above strategy if for some reason you want to avoid playing this version of Newcomb’s problem, since this attitude guarantees that you won’t.
I would like to draw further attention to this. Assuming Omega to be a perfect predictor opens the door to all kinds of logical contradictions along the lines of “I’m going to do the opposite of the prediction, regardless of what it happens to be.”