It’s not a question of whether Omega is fallible or not; it’s a question of whether Omega’s prediction (no matter how incorrect) depends on the decision you are going to make (backwards causality), or only on decisions you have made in the past (no backwards causality). The first case is uninteresting since it cannot occur in reality, and in the second case it is always better to two-box, no matter the payouts or the probability of Omega being wrong.
If Omega is 100% sure you’re one-boxing, you should two-box.
If Omega is 75% sure you’re one-boxing, you should two-box.
If Omega is 50% sure you’re one-boxing, you should two-box.
If Omega is 25% sure you’re one-boxing, you should two-box.
If Omega is 0% sure you’re one-boxing, you should two-box.
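For concreteness, here is a minimal sketch of the two ways of scoring that list, assuming the standard Newcomb payouts (a guaranteed $1,000 in the transparent box, $1,000,000 in the opaque box if one-boxing was predicted; those figures only surface later in the thread, so treat them, and the function names, as my assumptions). The first function does the bookkeeping the way the comment above does, with the prediction treated as already fixed; the second does it the way a one-boxer does, treating the prediction as correlated with the actual choice:

```python
# Toy expected-value comparison for Newcomb's problem.
# Assumed payouts (standard, but an assumption here):
#   transparent box: $1,000 always
#   opaque box: $1,000,000 if Omega predicted one-boxing, else $0

SMALL = 1_000
BIG = 1_000_000

def ev_fixed_prediction(p_box_is_full):
    """Two-boxer's bookkeeping: the opaque box is already full (or not)
    with probability p_box_is_full, and my choice does not change that."""
    one_box = p_box_is_full * BIG
    two_box = p_box_is_full * BIG + SMALL   # always exactly $1,000 better
    return one_box, two_box

def ev_correlated_prediction(accuracy):
    """One-boxer's bookkeeping: Omega's prediction matches my actual
    choice with probability `accuracy`."""
    one_box = accuracy * BIG                # full box iff predicted correctly
    two_box = (1 - accuracy) * BIG + SMALL  # full box only if Omega erred
    return one_box, two_box

for p in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(p, ev_fixed_prediction(p), ev_correlated_prediction(p))
```

Under the first scoring, two-boxing always comes out exactly $1,000 ahead, which is the dominance argument behind the list above; under the second, one-boxing pulls ahead as soon as the predictor is even modestly better than chance. Which scoring is the legitimate one is exactly what the rest of this thread argues about.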
What if Omega makes an identical copy of you, puts the copy in an identical situation, and uses the copy’s decision to predict what you will do? Is “whatever I decide to do, my copy will have decided the same thing” a valid argument?
No, because if Omega tells you that, then you have information that your copy doesn’t, which means that it’s not an identical situation; and if Omega doesn’t tell you, then you might just as well be the copy itself, meaning that either you can’t be predicted or you’re not playing Newcomb.
If Omega tells both of you the same thing, it lies to one of you; and in that case you’re not playing Newcomb either.
Could you elaborate on this?
That’s certainly the situation I have in mind (although Omega can of course tell both of you “I have made a copy of the person who walked into this room in order to simulate them; you are either the copy or the original” or something to that effect). But I don’t see how either of “you can’t be predicted” or “you’re not playing Newcomb” makes sense.
If you’re the copy that Omega bases its prediction of the other copy on, how does Omega predict you?
Unless you like money, in which case you should one-box.
If Omega is 100% sure you’re one-boxing, you can one-box and get $1,000,000 or you can two-box and get $1,001,000. You cannot make the argument that one-boxing is better in this case unless you argue that your decision affects Omega’s prediction, and that would be backwards causality. If you think backwards causality is a possibility, that’s fine and you should one-box; but then you still have to agree that under the assumption that backwards causality cannot exist, two-boxing wins.
Backwards causality cannot exist. I still take one box. I get the money. You don’t. Your reasoning fails.
On a related note: The universe is (as far as I know) entirely deterministic. I still have free will.
It’s not completely clear what “backward causality” (or any causality, outside the typical contexts) means, so maybe it can exist. Better to either ignore the concept in this context (as it doesn’t seem relevant) or taboo/clarify it.
The meaning of what Andreas was saying was sufficiently clear. He means “you know, stuff like flipping time travel and changing the goddamn past”. Trying to taboo causality and sending everyone off to read Pearl would be a distraction. Possibly a more interesting distraction than another “CDT one-boxes! Oh, um… wait… No, Newcomb’s doesn’t exist. Err… I mean CDT two-boxes and it is right to do so, so there!” conversation, but not an overwhelmingly relevant one.
We are in a certain sense talking about determining the past; the distinction is between shared structure (as in, the predictor has your source code) and time machines. The main problem seems to be an unwillingness to carefully consider the meaning of implausible hypotheticals, and continued distraction by the object-level dispute doesn’t seem to help.
(“Changing” vs. “determining” point should probably be discussed in the context of the future, where implausibility and fiction are less of a distraction.)
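To make the “shared structure” reading above concrete, here is a toy sketch (entirely my own construction; the function names and the $1,000 / $1,000,000 payouts are illustrative assumptions) in which the predictor simply has the agent’s code and runs it:

```python
# Toy model of "the predictor has your source code": Omega fills the
# opaque box by running the agent's own decision procedure, and the
# agent then runs the same procedure for real. Nothing runs backwards
# in time; the prediction and the decision share a common cause (the code).

SMALL, BIG = 1_000, 1_000_000

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play_newcomb(agent):
    predicted = agent()                        # Omega simulates the agent
    opaque = BIG if predicted == "one-box" else 0
    choice = agent()                           # the agent actually chooses
    return opaque if choice == "one-box" else opaque + SMALL

print(play_newcomb(one_boxer))   # 1000000
print(play_newcomb(two_boxer))   # 1000
```

With deterministic agents the prediction is trivially exact, and the one-boxing program walks away with more money even though no backwards causality is involved; whether a human facing a copy or simulation of themselves is relevantly like this is what the copy subthread above is disputing.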
If backwards causality cannot exist, would you say that your decision can affect the prediction that Omega made before you made your decision?
No. Both the prediction and my decision came about due to past states of the universe (including my brain). They do not influence each other directly. I still take one box and get $1,000,000, and that is the best possible outcome I can expect.