That might give you the right answer when Omega is simulating you perfectly, but presumably you’d want to one-box when Omega was simulating a slightly lossy, non-sentient version of you and only predicted correctly 90% of the time. For that (i.e. for all real world Newcomblike problems), you need a more sophisticated decision theory.
Well no, not necessarily. Again, let’s take Alf’s view. (Note: I edited this post recently to correct the display of the matrices.)
Remember that Alf has a high probability of 2-boxing, and he knows this about himself. Whether he would actually do better by 1-boxing will depend on how well Omega’s “mistaken” simulations are correlated with the (rare, freaky) event that Alf 1-boxes. Basically, Alf knows that Omega is right at least 90% of the time, but this doesn’t require a very sophisticated simulation at all, certainly not in Alf’s own case. Omega can run a very crude simulation, say “a clear 2-boxer”, and not fill box B (so Alf won’t get the $1 million). The game outcome would then have a probability matrix like this:
                Box B filled    Box B empty
Alf 2-boxes          0             0.99
Alf 1-boxes          0             0.01
Notice that Omega then has at most a 1% chance of a mistaken prediction.
But, I’m sure you’re thinking, won’t Omega run a fuller simulation with 90% accuracy and produce a probability matrix like this?
                Box B filled    Box B empty
Alf 2-boxes        0.099           0.891
Alf 1-boxes        0.009           0.001
Well Omega could do that, but now its probability of error has gone up from 1% to 10%, so why would Omega bother?
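To make the comparison concrete, here is a minimal sketch in Python, using the numbers from the matrices above and assuming Alf’s chance of 1-boxing is 0.01 and the fuller simulation is 90% accurate:

```python
# Probability that Alf actually 1-boxes (the rare, freaky event).
p_one_box = 0.01

# Accuracy of Omega's fuller simulation of Alf.
sim_accuracy = 0.90

# Strategy 1: crude prediction -- call Alf "a clear 2-boxer" and leave box B empty.
# Omega is wrong only when Alf actually 1-boxes.
crude_error = p_one_box                               # 0.01  (1%)

# Strategy 2: run the fuller simulation.
# Omega is wrong whenever the simulation mispredicts Alf's actual choice.
sim_error = 1 - sim_accuracy                          # 0.10  (10%)

# Joint outcome probabilities for the fuller simulation (the second matrix above).
p_2box_empty  = (1 - p_one_box) * sim_accuracy        # 0.891
p_2box_filled = (1 - p_one_box) * (1 - sim_accuracy)  # 0.099
p_1box_filled = p_one_box * sim_accuracy              # 0.009
p_1box_empty  = p_one_box * (1 - sim_accuracy)        # 0.001

print(f"crude strategy error: {crude_error:.1%}")                       # 1.0%
print(f"simulation error:     {sim_error:.1%}")                          # 10.0%
print(f"matrix check:         {p_2box_filled + p_1box_empty:.1%} wrong")  # 10.0%
```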
Let’s compare to a more basic case: weather forecasting. Say I have a simulation model which takes in the current atmospheric state above a land surface, runs it forward a day, and tries to predict rain. It’s pretty good: if there is going to be rain, then the simulation predicts rain 90% of the time; if there is not going to be rain, then it predicts rain only 10% of the time. But now someone shows me a desert, and asks me to predict rain: I’m not going to use a simulation with a 10% error rate, I’m just going to say “no rain”.
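The same arithmetic applies to the forecasting analogy; the 1% base rate of desert rain below is purely an illustrative assumption:

```python
# Illustrative base rate of rain in the desert.
p_rain = 0.01

# The simulation misses rain 10% of the time and falsely predicts rain 10% of the time.
p_miss, p_false_alarm = 0.10, 0.10

# Overall error if I run the simulation every day:
sim_error = p_rain * p_miss + (1 - p_rain) * p_false_alarm   # ~0.10

# Overall error if I just say "no rain" every day:
flat_error = p_rain                                          # 0.01

print(f"simulation error: {sim_error:.1%}")   # ~10.0%
print(f"'no rain' error:  {flat_error:.1%}")  # 1.0%
```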
So it seems in the case of Alf. Provided Alf’s chance of 1-boxing is low enough (i.e. lower than the underlying error rate of Omega’s simulations), Omega can always do best by just saying “a clear 2-boxer” and not filling box B. Omega may also say to himself “what an utter schmuck”, but he can’t fault Alf’s application of decision theory. And this applies whether Alf is a causal decision theorist or an evidential decision theorist.
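For completeness, here is a sketch of Alf’s expected payoffs when Omega uses the crude strategy. The $1,000 in box A is the usual Newcomb figure, and the assumption doing the work is that box B stays empty whatever Alf actually does:

```python
# Standard Newcomb payouts ($1,000 in box A is the usual assumption;
# the $1,000,000 figure for box B is from the post).
BOX_A = 1_000
BOX_B = 1_000_000

# Under the crude strategy, Omega never fills box B for Alf,
# regardless of what Alf ends up doing.
p_box_b_filled = 0.0

ev_two_box = BOX_A + p_box_b_filled * BOX_B   # $1,000
ev_one_box = p_box_b_filled * BOX_B           # $0

# Because the content of box B does not depend on Alf's actual choice here,
# causal and evidential reasoning give the same verdict: take both boxes.
print(f"EV of 2-boxing: ${ev_two_box:,.0f}")  # $1,000
print(f"EV of 1-boxing: ${ev_one_box:,.0f}")  # $0
```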