If I understand correctly, all decision theories discussed will two-box here, and rightly so: choosing one box doesn’t cause Omega to choose differently, since that decision was determined solely by the color of your dot.
Depending on the set-up, “innards-CSAs” may one-box here. Innards-CSAs go back to a particular moment in time (or to their creator’s probability distribution) and ask: “if I had been created at that time, with a (perhaps physically transparent) policy that would one-box, would I get more money than if I had been created with a (perhaps physically transparent) policy that would two-box?”
If your Omega came to use the colored dots in its prediction because one-boxing and two-boxing were correlated with dot-colors, and if the innards-CSA in question is programmed to do its counterfactual innards-swap back before Omega concluded that this was the correlation, and if your innards-CSA ended up copied (perhaps with variations) such that, if it had had different innards, Omega would have ended up with a different decision-rule… then it will one-box.
And “rightly so” in the view of the innards-CSA… because, by reasoning in this manner, the CSA can increase the odds that Omega has decision-rules that favor its own dot-color. At least according to its own notion of how to reckon counterfactuals.
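To make that counterfactual reckoning concrete, here is a toy Python sketch of the comparison the innards-CSA is described as making. The function names, the assumption that Omega’s blue-dot rule simply rewards whichever policy it saw correlated with blue dots when it fixed the rule, and the $1,000 / $1,000,000 amounts are all hypothetical stand-ins, not anyone’s actual formalism:

```python
# Toy illustration of the innards-CSA's counterfactual comparison.
# Hypothetical assumption: Omega's decision-rule for blue-dotted agents
# would have been shaped by whatever innards this agent (and its copies)
# had back when Omega fixed that rule.

def omega_fills_box_b(counterfactual_policy):
    """Assumed rule: Omega fills box B for blue dots iff the policy it saw
    correlated with blue dots (here, this agent's counterfactual policy)
    was one-boxing."""
    return counterfactual_policy == "one-box"

def payoff(policy, box_b_full):
    box_a, box_b = 1_000, 1_000_000
    if policy == "one-box":
        return box_b if box_b_full else 0
    return box_a + (box_b if box_b_full else 0)

def innards_csa_value(counterfactual_policy):
    # Swap the innards back *before* Omega fixed its rule, then see what
    # an agent with those innards would walk away with.
    return payoff(counterfactual_policy, omega_fills_box_b(counterfactual_policy))

print(innards_csa_value("one-box"))   # 1000000
print(innards_csa_value("two-box"))   # 1000
```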
Depending on your beliefs about what computation Omega did to choose its policy, the TDT counterfactual comes out as either “If things like me one-boxed, then Omega would put $1m into box B on seeing a blue dot” or “If things like me one-boxed, then Omega would still have decided to leave B empty when seeing a blue dot, and so if things like me one-boxed I would get nothing.”
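Putting those two branches side by side as a rough Python sketch (the flag `omega_ran_my_algorithm` and the payoff numbers are stand-ins for your actual beliefs about Omega’s computation, not anything canonical):

```python
# The two TDT counterfactuals described above, as a single toy function.
# If Omega's blue-dot rule was computed from what "things like me" do,
# the counterfactual "if things like me one-boxed" also changes box B;
# otherwise box B is assumed to stay empty regardless.

def tdt_counterfactual_payoff(i_one_box, omega_ran_my_algorithm):
    box_a, box_b = 1_000, 1_000_000
    box_b_full = i_one_box if omega_ran_my_algorithm else False
    if i_one_box:
        return box_b if box_b_full else 0
    return box_a + (box_b if box_b_full else 0)

# Branch 1: Omega's rule depended on things like me -> one-boxing pays.
print(tdt_counterfactual_payoff(True, True), tdt_counterfactual_payoff(False, True))
# Branch 2: it didn't -> one-boxing just forgoes the $1,000.
print(tdt_counterfactual_payoff(True, False), tdt_counterfactual_payoff(False, False))
```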
I see your point, which is why I made sure to write “In addition, all people with working brains have chosen two boxes in the past.”
My point is that you can have situations where there is a strong correlation, so that Omega nearly always predicts correctly, but where Omega’s prediction isn’t caused by the output of the algorithm you use to compute your decisions; in that case you should two-box.
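To spell out the arithmetic behind that: once the prediction is fixed by the dot color rather than by your decision algorithm, your choice can’t change whether box B is full, and two-boxing gains the extra $1,000 in either state (standard Newcomb amounts assumed here as a toy illustration):

```python
# Dominance check: with box B's contents causally fixed by the dot color,
# two-boxing beats one-boxing by the $1,000 in box A either way.

def payoff(two_box, box_b_full):
    box_a, box_b = 1_000, 1_000_000
    return (box_b if box_b_full else 0) + (box_a if two_box else 0)

for box_b_full in (True, False):
    gain = payoff(True, box_b_full) - payoff(False, box_b_full)
    print(box_b_full, gain)   # prints 1000 in both cases
```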
The lack of effort to distinguish between the two cases seems to have generated a lot of confusion (at least it got me for a while).