As I argued in this comment, however, the scenario as it currently stands is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction.
Previous discussions of Transparent Newcomb’s problem have been well specified. I seem to recall doing so in footnotes so as to avoid distraction.
I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I’m not sure if consistency in this situation would even be possible for Omega. Any comments?
The problem (such as it is) is that there is ambiguity between the possible coherent specifications, not a complete lack. As your comment points out, there are (merely) two possible situations for the player to be in, and Omega is able to counterfactually predict the response to either of them, with said responses limited to a boolean. That’s not a lot of permutations. You could specify all 4 exhaustively if you are lazy.
IF (Two box when empty AND One box when full) THEN X
IF …
Any difficulty here is in choosing the set of rewards that most usefully illustrate the interesting aspects of the problem.
I’d say that about hits the nail on the head. The permutations certainly are exhaustively specifiable. The problem is that I’m not sure how to specify some of the branches. Here are all four possibilities (written in pseudo-code following your example):
1. IF (Two box when empty AND Two box when full) THEN X
2. IF (One box when empty AND One box when full) THEN X
3. IF (Two box when empty AND One box when full) THEN X
4. IF (One box when empty AND Two box when full) THEN X
The rewards for 1 and 2 seem obvious; I’m having trouble, however, imagining what the rewards for 3 and 4 should be. The original Newcomb’s Problem had a simple point to demonstrate, namely that logical connections should be respected along with causal connections. This point was made simple by the fact that there are two choices, but only one situation. When discussing transparent Newcomb, though, it’s hard to see how this point maps to the latter two situations in a useful and/or interesting way.
Option 3 is of the most interest to me when discussing the Transparent variant. Many otherwise adamant One-Boxers will advocate (what is in effect) 3 when first encountering the question. Since I advocate strategy 2, there is a more interesting theoretical disagreement; i.e., from my perspective I get to argue with (literally) less-wrong wrong people, with a correspondingly higher chance that I’m the one who is confused.
The difference between 2 and 3 becomes more obviously relevant when noise is introduced (e.g., a 99%-accuracy Omega). I choose to take literally nothing in some situations. Some think that is crazy...
In the simplest formulation the payoff for 3 is undetermined. But not undetermined in the sense that Omega’s proposal is made incoherent; arbitrary, rather, in the sense that Omega can do whatever the heck it wants and still construct a coherent narrative. I’d personally call 3 an obviously worse decision, but for simplicity I prefer to define it as a defect (Big Box Empty outcome).
As for 4… A payoff of both boxes empty (or both boxes full but contaminated with anthrax spores) seems fitting. But simply leaving the large box empty is sufficient for decision-theoretic purposes.
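For concreteness, here is a minimal sketch in Python of the convention proposed above, with the usual Newcomb amounts of $1,000,000 (big box) and $1,000 (small box) assumed, since the thread never fixes the dollar values: only the unconditional one-boxer (strategy 2) finds the big box full, while 1, 3, and 4 all get the Big Box Empty outcome.

```python
# Sketch of the payoff convention proposed above. The dollar amounts are the
# usual Newcomb figures and are an assumption; the thread never specifies them.

# A strategy is (choice when the big box is empty, choice when the big box is full).
STRATEGIES = {
    1: ("two-box", "two-box"),
    2: ("one-box", "one-box"),
    3: ("two-box", "one-box"),
    4: ("one-box", "two-box"),
}

def payoff(strategy_id: int) -> int:
    """Payoff under the convention that Omega fills the big box only for strategy 2."""
    big_box_full = (strategy_id == 2)
    when_empty, when_full = STRATEGIES[strategy_id]
    choice = when_full if big_box_full else when_empty
    big = 1_000_000 if big_box_full else 0        # the big box is taken either way
    small = 1_000 if choice == "two-box" else 0   # the small box only when two-boxing
    return big + small

for sid, strategy in STRATEGIES.items():
    print(sid, strategy, payoff(sid))   # 1 -> 1000, 2 -> 1000000, 3 -> 1000, 4 -> 0
```

Under this convention 4 already nets nothing, so Omega never needs to resort to emptying the small box (or to anthrax).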
Out of interest, and because your other comments on the subject seem well informed, what do you choose when you encounter Transparent Newcomb and find the big box empty?
This is a question that I find confusing due to conflicting intuitions. Fortunately, since I endorse reflective consistency, I can replace that question with the following one, which is equivalent in my decision framework, and which I find significantly less confusing:
“What would you want to precommit to doing, if you encountered transparent Newcomb and found the big box (a.k.a. Box B) empty?”
My answer to this question would be dependent upon Omega’s rule for rewarding players. If Omega only fills Box B if the player employs the strategy outlined in 2, then I would want to precommit to unconditional one-boxing—and since I would want to precommit to doing so, I would in fact do so. If Omega is willing to reward the player by filling Box B even if the player employs the strategy outlined in 3, then I would see nothing wrong with two-boxing, since I would have wanted to precommit to that strategy in advance. Personally, I find the former scenario (the one where Omega only rewards people who employ strategy 2) to be more in line with the original Newcomb’s Problem, for some intuitive reason that I can’t quite articulate.
What’s interesting, though, is that some people two-box even upon hearing that Omega only rewards the strategy outlined in 2 (upon hearing, in other words, that they are in the first scenario described in the above paragraph). I would imagine that their reasoning process goes something like this: “Omega has left Box B empty. Therefore he has predicted that I’m going to two-box. It is extremely unlikely a priori that Omega is wrong in his predictions, and besides, I stand to gain nothing from one-boxing now. Therefore, I should two-box, both because it nets me more money and because Omega predicted that I would do so.”
I disagree with this line of reasoning, however, because it is very similar to the line of reasoning that leads to self-fulfilling prophecies. As a rule, I don’t do things just because somebody said I would do them, even if that somebody has a reputation for being extremely accurate, because then that becomes the only reason it happened in the first place. As with most situations involving acausal reasoning, however, I can only place so much confidence in me being correct, as opposed to me being so confused I don’t even realize I’m wrong.
It would seem to me that Omega’s actions would be as follows:
1. IF (Two box when empty AND Two box when full) THEN Empty
2. IF (One box when empty AND One box when full) THEN Full
3. IF (Two box when empty AND One box when full) THEN Empty or Full
4. IF (One box when empty AND Two box when full) THEN Refuse to present boxes
Cases 1 and 2 are straightforward. Case 3 works for the problem, no matter which set of boxes Omega chooses to leave.
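To make the case distinctions explicit, here is one way that rule might be written out as a procedure (a sketch only); the “Empty or Full” branch for case 3 is rendered as an arbitrary pick, since either choice is consistent with the player’s predicted behaviour.

```python
import random

# Sketch of the rule listed above. A strategy is encoded as
# (choice when the big box is empty, choice when the big box is full).

def omega_action(strategy):
    """Return how Omega fills the big box, or 'refuse' for case 4."""
    when_empty, when_full = strategy
    if when_empty == "two-box" and when_full == "two-box":   # case 1
        return "empty"
    if when_empty == "one-box" and when_full == "one-box":   # case 2
        return "full"
    if when_empty == "two-box" and when_full == "one-box":   # case 3
        return random.choice(["empty", "full"])  # either outcome is self-consistent
    return "refuse"                                          # case 4

print(omega_action(("one-box", "two-box")))  # -> refuse
```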
In order for Omega to maintain its high prediction accuracy, though, it is necessary that, if Omega predicts that a given player will choose option 4, Omega simply refuse to present the transparent boxes to this player. Or, at least, that the number of players who follow the other three options should vastly outnumber the fourth-option players.
This is an interesting response because 4 is basically what Jiro was advocating earlier in the thread, and you’re basically suggesting that Omega wouldn’t even present the opportunity to people who would try to do that. Would you agree with this interpretation of your comment?
Yes, I would.

If we assume, for the moment, that the people who would take option 4 form at least 10% of the population in general (this may be a little low), and we further stipulate that Omega has a track record of success in 99% or more of previous trials (as is often specified in Newcomb-like problems), then it is clear that whatever algorithm Omega is using to decide whom to present the boxes to is biased, and biased heavily, against offering the boxes to such a person.
Consider:
P(P) = The probability that Omega will present the boxes to a given person.
P(M|P) = The probability that Omega will fill the boxes correctly (empty for a two-boxer, full for a one-boxer)
P(M’|P) = The probability that Omega will fail to fill the boxes correctly
P(O) = The probability that the person will choose option 4
P(M’|O) = 1 (from the definition of option 4)
therefore P(M|O) = 0
and if Omega is a perfect predictor, then P(M|O’) = 1 as well.
P(M|P) = 0.99 (from the statement of the problem)
P(O) = 0.1 (assumed)
Now, of all the people to whom boxes are presented, Omega is only getting at most one percent wrong; P(M’|P) ≤ 0.01. Since P(M’|O) = 1 and P(M’|O’) = 0, the errors among presented players are exactly the option-4 players, so it follows that P(O|P) ≤ 0.01.
If Omega is a less than perfect predictor, then P(M’|O’) > 0, and the bound tightens: P(O|P) < 0.01.
And since P(O|P) ≤ 0.01 while P(O) = 0.1, Bayes’ theorem gives P(P|O) = P(O|P) · P(P) / P(O) ≤ 0.1 · P(P). I therefore conclude that Omega must have a bias, and a fairly strong one, against presenting the boxes to such perverse players.
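To make the arithmetic explicit, here is a quick numeric check of that bound, using the assumed figures (P(M’|P) ≤ 0.01 and P(O) = 0.1):

```python
# Numeric check of the bound argued above, using the assumed figures.
p_wrong_given_presented = 0.01   # P(M'|P): Omega errs on at most 1% of presented players
p_option4 = 0.10                 # P(O): assumed base rate of option-4 players

# Every presented option-4 player counts as an error, so P(O|P) <= P(M'|P).
p_option4_given_presented = p_wrong_given_presented   # upper bound on P(O|P)

# Bayes: P(P|O) = P(O|P) * P(P) / P(O), so P(P|O) <= 0.1 * P(P).
ratio = p_option4_given_presented / p_option4
print(f"P(P|O) <= {ratio:.1f} * P(P)")   # option-4 players are offered the boxes at
                                         # no more than a tenth of the overall rate
```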