Omega can’t predict that type of event without being a pre-cog.
Assume that the person choosing the boxes is a whole brain emulation, and that Omega runs a perfect simulation, which guarantees formal identity between Omega’s prediction and the person’s actual decision, even though the computations are performed separately.
So the chooser in this case is a fully deterministic system, not a real live human brain with some chance of random firings screwing up Omega’s prediction?
Wow, that’s an interesting case, and I hadn’t really thought about it! One interesting point though—suppose I am the chooser in that case; how can I tell which simulation I am? Am I the one which runs after Omega made its prediction? Or am I the one which Omega runs in order to make its prediction, and which does have a genuine causal effect on what goes in the boxes? It seems I have no way of telling, and I might (in some strange sense) be both of them. So causal decision theory might advise me to 1-box after all.
This is more of a way of pointing out a special case that shares the relevant considerations with TDT-like approaches to decision theory (in this extreme identical-simulation case it’s just Hofstadter’s “superrationality”).
If we start from this case and gradually make the prediction model and the player less and less similar to each other (perhaps by making the model less detailed), at what point do the considerations that make you one-box in this edge case break down? Clearly, if you change the prediction model just a little, the correct answer shouldn’t immediately flip, but CDT is no longer applicable out of the box (arguably, even if you “control” two identical copies, it’s not directly applicable either). Hence the need for a generalization that admits imperfect acausal “control” over sufficiently similar decision-makers (and sufficiently accurate predictions), in the same sense in which you “control” your identical copies.
That might give you the right answer when Omega is simulating you perfectly, but presumably you’d want to one-box when Omega was simulating a slightly lossy, non-sentient version of you and only predicted correctly 90% of the time. For that (i.e. for all real-world Newcomb-like problems), you need a more sophisticated decision theory.
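To see where the 90% figure pulls toward one-boxing, here is a minimal sketch in Python of the naive expected-value arithmetic, assuming the standard payoffs ($1,000,000 in box B, $1,000 in box A) and treating Omega’s accuracy as independent of which choice is actually made; the reply below disputes exactly that independence assumption.

```python
# Naive expected-value comparison for Newcomb's problem, assuming the
# standard payoffs: $1,000,000 in box B if Omega predicted 1-boxing,
# plus a guaranteed $1,000 in box A. "accuracy" is the probability that
# Omega's prediction matches the chooser's actual decision, treated here
# (naively) as independent of which decision the chooser makes.

def expected_values(accuracy):
    one_box = accuracy * 1_000_000                 # box B filled iff prediction correct
    two_box = (1 - accuracy) * 1_000_000 + 1_000   # box B filled only if Omega erred
    return one_box, two_box

for acc in (0.5, 0.9, 0.99, 1.0):
    ev1, ev2 = expected_values(acc)
    print(f"accuracy {acc:.2f}: one-box ${ev1:,.0f}, two-box ${ev2:,.0f}")
```

At 90% accuracy this naive calculation gives $900,000 for one-boxing against $101,000 for two-boxing, which is the intuition behind the comment above.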
Well, no, not necessarily. Again, let’s take Alf’s view.
Remember that Alf has a high probability of 2-boxing, and he knows this about himself. Whether he would actually do better by 1-boxing will depend on how well Omega’s “mistaken” simulations are correlated with the (rare, freaky) event that Alf 1-boxes. Basically, Alf knows that Omega is right at least 90% of the time, but this doesn’t require a very sophisticated simulation at all, certainly not in Alf’s own case. Omega can run a very crude simulation, say “a clear 2-boxer”, and not fill box B (so Alf won’t get the $1 million). Basically, the game outcome would have a probability matrix like this:
              Box B filled   Box B empty
Alf 2-boxes        0             0.99
Alf 1-boxes        0             0.01
Notice that Omega has only a 1% chance of a mistaken prediction.
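A quick sketch checking that number (the 0.99/0.01 base rates for Alf are the ones assumed above):

```python
# Joint outcome probabilities when Omega uses the crude rule
# "Alf is a clear 2-boxer, so leave box B empty", given that Alf
# 2-boxes with probability 0.99 and 1-boxes with probability 0.01.

p_two_box, p_one_box = 0.99, 0.01

joint = {
    ("filled", "2-boxes"): 0.0,          # box B is never filled under this rule
    ("empty",  "2-boxes"): p_two_box,
    ("filled", "1-boxes"): 0.0,
    ("empty",  "1-boxes"): p_one_box,
}

# Omega is wrong only in the rare case where Alf 1-boxes anyway.
error = joint[("empty", "1-boxes")]
print(f"Omega's error rate with the crude rule: {error:.0%}")   # 1%
```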
But, I’m sure you’re thinking, won’t Omega run a fuller simulation with 90% accuracy and produce a probability matrix like this?
              Box B filled   Box B empty
Alf 2-boxes      0.099          0.891
Alf 1-boxes      0.009          0.001
Well Omega could do that, but now its probability of error has gone up from 1% to 10%, so why would Omega bother?
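For comparison, here is the same check for the fuller simulation (a sketch, assuming a simulation that matches Alf’s actual choice 90% of the time and the same 0.99/0.01 base rates):

```python
# Joint probabilities when Omega instead runs a simulation that matches
# Alf's actual choice 90% of the time; box B is filled exactly when the
# simulation predicts 1-boxing.

p_two_box, p_one_box = 0.99, 0.01
sim_accuracy = 0.9

p_filled_2box = p_two_box * (1 - sim_accuracy)   # 0.099
p_empty_2box  = p_two_box * sim_accuracy         # 0.891
p_filled_1box = p_one_box * sim_accuracy         # 0.009
p_empty_1box  = p_one_box * (1 - sim_accuracy)   # 0.001

# Omega errs whenever the box state disagrees with Alf's actual choice.
error_simulation = p_filled_2box + p_empty_1box
error_crude_rule = p_one_box
print(f"Error rate with the 90%-accurate simulation: {error_simulation:.0%}")  # 10%
print(f"Error rate with the crude '2-boxer' rule:    {error_crude_rule:.0%}")  # 1%
```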
Let’s compare to a more basic case: weather forecasting. Say I have a simulation model which takes in the current atmospheric state above a land surface, runs it forward a day, and tries to predict rain. It’s pretty good: if there is going to be rain, then the simulation predicts rain 90% of the time; if there is not going to be rain, then it predicts rain only 10% of the time. But now someone shows me a desert and asks me to predict rain: I’m not going to use a simulation with a 10% error rate; I’m just going to say “no rain”.
So it seems in the case of Alf. Provided Alf’s chance of 1-boxing is low enough (i.e. lower than the underlying error rate of Omega’s simulations), Omega can always do best by just saying “a clear 2-boxer” and not filling box B. Omega may also say to himself “what an utter schmuck”, but he can’t fault Alf’s application of decision theory. And this applies whether Alf is a causal decision theorist or an evidential decision theorist.