The only problem is that you have causality going back in time. At the time of Omega’s decision, the passer-by’s state with respect to one- or two-boxing is null and undetermined; it does not yet exist. Omega can scan his brain or whatever and make his prediction, but the passer-by is not bound by that prediction and has not (yet) made any decisions.
The first chance our passer-by gets to make a decision is after the boxes are fixed. His decision (as opposed to his personality, preferences, goals, etc.) cannot affect Omega’s prediction, because causality can’t go backwards in time. So at this point, after step 2, which is the only time he can make a decision, he should two-box.
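To make the dominance argument concrete, here is a minimal sketch in Python. The payoff figures (the standard $1,000 in the transparent box and $1,000,000 in the opaque box) and a 99% predictor accuracy are assumed for illustration; none of the numbers come from the thread. The point is that once the boxes are fixed, two-boxing comes out ahead no matter what they contain, even though conditioning on the choice makes one-boxing look better.

```python
# Sketch of the two expectations at issue in Newcomb's problem.
# Assumed for illustration: standard payoffs and a 0.99 predictor accuracy.

ACCURACY = 0.99          # assumed accuracy of Omega's prediction
SMALL, BIG = 1_000, 1_000_000

def ev_conditioned_on_choice(one_box: bool) -> float:
    """Expectation if the choice is treated as evidence about Omega's prediction."""
    p_big = ACCURACY if one_box else 1 - ACCURACY
    return p_big * BIG + (0 if one_box else SMALL)

def ev_with_boxes_fixed(big_present: bool, one_box: bool) -> float:
    """Expectation after step 2: the boxes are fixed, the choice can't change them."""
    return (BIG if big_present else 0) + (0 if one_box else SMALL)

if __name__ == "__main__":
    # Conditioning on the choice, one-boxing looks better:
    print(ev_conditioned_on_choice(True))   # 990000.0
    print(ev_conditioned_on_choice(False))  # 11000.0
    # But with the contents already fixed, two-boxing dominates either way:
    for big in (True, False):
        assert ev_with_boxes_fixed(big, one_box=False) > ev_with_boxes_fixed(big, one_box=True)
```

This is just the usual dominance reasoning spelled out: the second function captures the view in the comment above, where the decision happens after the prediction is locked in.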
As far as I’m aware, what you’re saying is basically the same thing as what causal decision theory says. I hate to pass the buck, but So8res has written a very good post on this already; anything I could say here has already been said by him, and better. If you’ve read it already, then I apologize; if not, I’d say give it a skim and see what you think of it.
As far as I’m aware, what you’re saying is basically the same thing as what causal decision theory says.
So8res’ post points out that
CDT is the academic standard decision theory. Economics, statistics, and philosophy all assume (or, indeed, define) that rational reasoners use causal decision theory to choose between available actions.
It seems I’m in good company :-)