Naive argument coming up.
How Omega decides what to predict, or what makes its stated condition for box B (i.e., the result of its “prediction”) come true, is not relevant. Ignoring the data that says it’s always/almost always correct, however, seems … not right. Any decision must be made with the understanding that Omega most likely predicted it. You can’t outsmart it by changing your mind at the last second and hoping its model of you failed to update. The moment you decide to two-box is the moment Omega predicted when it chose to empty box B.
Consider this:
Andy: “Sure, one box seems like the good choice, because Omega would take the million away otherwise. OK. … Now that the boxes are in front of me, I’m thinking I should take both. Because, you know, two is better than one. And it’s already decided, so my choice won’t change anything. Both boxes.”
Barry: “Sure, one box seems like the good choice, because Omega would take the million away otherwise. OK. … Now that the boxes are in front of me, I’m thinking I should take both. Because, you know, two is better than one. Of course the outcome still depends on what Omega predicted. Say I choose both boxes. Then if Omega’s prediction is correct this time, I will find an empty B. But maybe Omega was wrong THIS time. Sure, and maybe THIS time I will also win the lottery. How it would have known is not relevant. The fact that Omega already acted on its prediction doesn’t make it more likely to be wrong. Really, what is the dilemma here? One box.”
Ok, I don’t expect that I’m the first person to say all this. But then, I wouldn’t have expected anybody to two-box, either.
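To put a rough number on Barry’s reasoning, here is a minimal expected-value sketch in Python. It assumes the standard payoffs (the $1,000,000 in box B, plus $1,000 always in box A, an amount not stated above) and treats Omega’s accuracy as a free parameter.

```python
# A minimal expected-value sketch of the one-box vs. two-box choice.
# Assumptions: $1,000,000 in box B iff Omega predicted one-boxing; $1,000
# always in box A (an assumed standard amount); Omega predicts correctly
# with probability `accuracy`.

def expected_values(accuracy: float) -> dict[str, float]:
    small, big = 1_000, 1_000_000
    return {
        # One-box: you get the million exactly when Omega predicted correctly.
        "one-box": accuracy * big,
        # Two-box: you always get the small prize, and the million only if
        # Omega happened to be wrong THIS time.
        "two-box": small + (1 - accuracy) * big,
    }

for p in (0.5, 0.9, 0.99, 1.0):
    print(p, expected_values(p))
# One-boxing comes out ahead for any accuracy above roughly 0.5005,
# never mind "always/almost always correct".
```

Nothing in the calculation depends on how Omega does it, only on its track record, which is the point.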
major said:
You’re not the only person to wonder this. Either I’m missing something, or two-boxers just fail at induction.
I have to wonder how two-boxers would do on the “Hot Stove Problem.”
In case you guys haven’t heard of such a major problem in philosophy, I will briefly explain the Hot Stove Problem:
You have touched a hot stove 100 times. 99 times you have been burned. Nothing has changed about the stove that you know about. Do you touch it again?
I can see the relation to Newcomb—this is also a weird counterfactual that will never happen. I haven’t deliberately touched a hot stove in my adult life, and don’t expect to. I certainly won’t get to 99 times.
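For what it’s worth, the induction major is pointing at is the same track-record estimate in both cases. A rough sketch; the stove counts come from the quote, while the count for Omega and the Laplace-style smoothing are assumptions of mine, not part of the problem.

```python
# Same track-record estimate applied to the stove and to Omega.
# Stove counts are from the quote; Omega's count is an assumed stand-in for
# "always/almost always correct"; Laplace smoothing is one reasonable choice.

def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace estimate that the observed pattern holds on the next trial."""
    return (successes + 1) / (trials + 2)

p_burn = rule_of_succession(99, 100)          # burned on 99 of 100 touches
p_omega_right = rule_of_succession(100, 100)  # assumed: correct 100 of 100

print(f"P(burned next touch)     ~ {p_burn:.2f}")         # ~0.98
print(f"P(Omega right next game) ~ {p_omega_right:.2f}")  # ~0.99
# Treating either record as irrelevant to the next case is the same mistake.
```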