If you said earlier in this thread that you would two-box, you are a two-boxer. If you said earlier in this thread that you would one-box, you are a one-boxer.
Oh, I can’t change my mind? I do that on a regular basis, you know...
This means you can influence Omega’s prediction (and thus the contents of the boxes) simply by choosing to be a one-boxer.
This implies that I am aware that I’ll face Newcomb’s Problem.
Let’s run Newcomb’s Problem with a random passer-by picked off the street: he has no idea what’s going to happen to him and has never heard of Omega or Newcomb’s Problem before. Omega has to make a prediction and fill the boxes before the passer-by gets any hint that something is going to happen.
So Step 1 happens, the boxes are set up, and the whole game is explained to our passer-by. What should he do? He never chose to be a one-boxer or a two-boxer, because he had no idea such things existed. He can only make a choice now, and the boxes are already filled and immutable. Why should he one-box?
Oh, I can’t change my mind? I do that on a regular basis, you know...
It seems unlikely to me that you would change your mind about being a one-boxer/two-boxer over the course of a single thread. Nevertheless, if you did so, I apologize for making presuppositions.
So Step 1 happens, the boxes are set up, and the whole game is explained to our passer-by. What should he do? He never chose to be a one-boxer or a two-boxer, because he had no idea such things existed. He can only make a choice now, and the boxes are already filled and immutable. Why should he one-box?
If Omega is a good-enough predictor, he will even be able to predict future changes in your state of mind. Therefore, the decision to one-box can and will affect Omega’s prediction, even if said decision is made AFTER Omega’s prediction.
If our hypothetical passer-by chooses to one-box, then to Omega he is a one-boxer. If he chooses to two-box, then to Omega he is a two-boxer. There’s no “not choosing”: if he makes a choice about what to do, he is choosing.
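To make this concrete, here is a minimal sketch (not from the original exchange) of the expected-value comparison this argument relies on. It assumes the standard payoffs of $1,000 in the always-present box and $1,000,000 in the box Omega fills only if it predicts one-boxing, and a predictor that is correct with probability p:

```python
# Expected payoff of one-boxing vs. two-boxing against a predictor that is
# correct with probability p. Payoff amounts are the standard (assumed) ones.
SMALL = 1_000        # transparent box, always contains $1,000
BIG = 1_000_000      # opaque box, filled only if Omega predicted one-boxing

def expected_value(strategy: str, p: float) -> float:
    if strategy == "one-box":
        # With probability p, Omega correctly predicted one-boxing and filled the opaque box.
        return p * BIG + (1 - p) * 0
    # Otherwise two-box: with probability p, Omega correctly predicted two-boxing
    # and left the opaque box empty.
    return p * SMALL + (1 - p) * (BIG + SMALL)

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box={expected_value('one-box', p):,.0f}, "
          f"two-box={expected_value('two-box', p):,.0f}")
# On these assumed numbers, one-boxing comes out ahead once p exceeds roughly 0.5005.
```

Note that the comparison treats the agent’s eventual choice as the thing Omega predicted, which is exactly the premise the two sides of this exchange disagree about.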
The only problem is that this has causality going backwards in time. At the time of Omega’s decision, the passer-by’s state with respect to one- or two-boxing is null: undetermined, nonexistent. Omega can scan his brain or whatever and make his prediction, but the passer-by is not bound by that prediction and has not (yet) made any decisions.
The first chance our passer-by gets to make a decision is after the boxes are fixed. His decision (as opposed to his personality, preferences, goals, etc.) cannot affect Omega’s prediction, because causality can’t go backwards in time. So at this point, after Step 2 (the only time he can make a decision), he should two-box.
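For contrast, here is a minimal sketch of the dominance reasoning behind this conclusion, again with the assumed standard $1,000 / $1,000,000 payoffs: once the contents are fixed, two-boxing yields exactly $1,000 more in either possible state of the boxes.

```python
# Causal-dominance view: by the time the passer-by chooses, the opaque box is
# already either full or empty, and his choice cannot change that. Compare
# payoffs state by state (assumed standard amounts).
SMALL = 1_000
BIG = 1_000_000

for opaque_box_full in (True, False):
    one_box = BIG if opaque_box_full else 0
    two_box = one_box + SMALL   # take the opaque box plus the guaranteed $1,000
    print(f"opaque box full={opaque_box_full}: one-box={one_box:,}, two-box={two_box:,}")
# In both states two-boxing is exactly $1,000 better, which is the dominance
# argument CDT uses to recommend two-boxing.
```

The disagreement between the two sketches is not about the arithmetic but about whether the fixed-contents framing or the prediction-correlation framing is the right way to evaluate the decision.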
As far as I’m aware, what you’re saying is basically the same thing as what causal decision theory says. I hate to pass the buck, but So8res has written a very good post on this already; anything I could say here has already been said by him, and better. If you’ve read it already, then I apologize; if not, I’d say give it a skim and see what you think of it.
As far as I’m aware, what you’re saying is basically the same thing as what causal decision theory says.
So8res’ post points out that
CDT is the academic standard decision theory. Economics, statistics, and philosophy all assume (or, indeed, define) that rational reasoners use causal decision theory to choose between available actions.
It seems I’m in good company :-)