You cannot know this unless you (a) consider backwards causality, which is wrong, or (b) consider the absence of free will, which is uninteresting.
You can have “free will” in the sense of being able to do what you want within the realm of possibility, while your wants are set deterministically.
If I offer most people a choice between receiving a hundred dollars, or being shot in the head, I can predict with near certainty that they will choose the hundred dollars, because I know enough about what kind of agents they are. Any formulation of “free will” which says I should not be able to do this is simply wrong. If I were making the same offer to Queebles (a species which hates money and loves being shot in the head), I would predict the reverse. Omega, having sufficiently complete information and perfect reasoning, can predict in advance whether you will one-box or two-box.
You also cannot know that Omega will correctly predict your choice with p≠0.5. At best, you can only know that Omega predicts you will one-box or two-box with p=whatever.
You can predict that Kasparov will beat you in a chess match without knowing the specific moves he’ll make. If you could predict all the moves he’d make, you could beat him in a chess match, but you can’t. Similarly, if you could assign unequal probabilities to how Omega would fill the boxes irrespective of your own choice, then you could act on those probabilities and beat Omega more than half the time, so that would entail a p≠0.5 probability of Omega predicting your choice.
If you play chess against a perfect chess-playing machine, which has solved the game of chess, then you can predict in advance that if you decide to play black, black will lose, and if you decide to play white, white will lose, because you know that the machine is playing on a higher level than you. And if you play through Newcomb’s problem with Omega, you can predict that if you one box, both boxes will contain money, and if you two box, only one will. Omega is on a higher level than you, the game has been played, and you already lost.
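To make the p≠0.5 point above concrete, here is a minimal sketch, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 always in the transparent box; the thread itself never names the amounts), of how each choice’s expected value depends on Omega’s accuracy:

```python
# Minimal sketch: expected payoff of one-boxing vs. two-boxing as a function
# of the probability p that Omega predicts your choice correctly.
# Payoff amounts are the standard Newcomb ones, assumed here, not stated above.

BIG = 1_000_000   # opaque box, filled only if Omega predicted one-boxing
SMALL = 1_000     # transparent box, always filled

def expected_one_box(p):
    # You get BIG only if Omega correctly predicted that you would one-box.
    return p * BIG

def expected_two_box(p):
    # You always get SMALL; you also get BIG if Omega wrongly predicted one-boxing.
    return SMALL + (1 - p) * BIG

for p in (0.5, 0.5005, 0.6, 0.9, 1.0):
    print(p, expected_one_box(p), expected_two_box(p))

# At p = 0.5 (no predictive power) two-boxing is ahead; the break-even point
# is p = (SMALL + BIG) / (2 * BIG) ≈ 0.5005, so any accuracy meaningfully
# above chance makes one-boxing the higher expected-value choice.
```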
The reason you lose in chess is that you will make the wrong moves, and the reason you will make the wrong moves is that there are far too many of them for you to be likely to find the right ones by chance. This is not the case in a game that consists of only two different moves.
If I offer most people a choice between receiving a hundred dollars, or being shot in the head, I can predict with near certainty that they will choose the hundred dollars, because I know enough about what kind of agents they are.
What if you also tell them that you’ve made a prediction about them, and if your prediction is correct, they will get the money and not be shot even if their decision was to get shot? (If your prediction was wrong, the same happens as in your original game.)
What if you were in that very situation, with Omega, whose predictions are always right, holding the money and the gun? Could you make a distinction between the choices offered to you?
The reason you lose in chess is that you will make the wrong moves, and the reason you will make the wrong moves is that there are far too many of them for you to be likely to find the right ones by chance. This is not the case in a game that consists of only two different moves.
In a game with two moves, you want to model the other person, and play one level higher than that. So if I take the role of Omega and put you in Newcomb’s problem, and you think I’ll expect you to two box because you’ve argued in favor of two boxing, then you expect me to put money in only one box, so you want to one box, thereby beating your model of me. But if I expect you to have thought that far, then I want to put money in both boxes, making two boxing the winning move, thereby beating my model of you. And if you expect me to have thought that far, you want to play a level above your model of me and one box again.
If humans followed this kind of recursion infinitely, it would never resolve and you couldn’t do better than maximum entropy in predicting the other person’s decision. But people don’t do that; humans tend to follow very few levels of recursion when modeling others (example here; you can look at the comments for the results). So if one person is significantly better at modeling the other, they’ll have an edge and be able to do considerably better than maximum entropy in guessing the other person’s choice.
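As a toy illustration of the depth-of-recursion point (the framing and numbers here are mine, not the commenter’s): a player who reasons “they expect X, so I’ll do the opposite” for some number of levels, against a predictor who can only simulate so many levels, loses their unpredictability as soon as the predictor can simulate deeper than they actually go.

```python
# Toy sketch of "levels of modeling" in a two-move game. A player who reasons
# to depth k flips their intended move once per level, starting from a level-0
# default. A predictor who can simulate at least as many levels as the player
# actually uses recovers the player's real move.

MOVES = ("one-box", "two-box")

def other(move):
    return MOVES[1 - MOVES.index(move)]

def move_at_depth(depth, default="two-box"):
    """The move a player settles on after `depth` rounds of
    'they expect X, so I'll do the opposite' reasoning."""
    move = default
    for _ in range(depth):
        move = other(move)
    return move

def predict(player_depth, predictor_depth, default="two-box"):
    """A predictor who can simulate at most `predictor_depth` levels."""
    simulated_depth = min(player_depth, predictor_depth)
    return move_at_depth(simulated_depth, default)

# The deep predictor is always right; the shallow one only matches when the
# parities happen to line up. Real humans stop after a few levels, so the
# better modeler gets the edge.
for player_depth in range(5):
    actual = move_at_depth(player_depth)
    shallow_ok = predict(player_depth, predictor_depth=1) == actual
    deep_ok = predict(player_depth, predictor_depth=10) == actual
    print(player_depth, actual, shallow_ok, deep_ok)
```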
Omega is a hypothetical entity who models the universe perfectly. If you decide to one box, his model of you decides to one box, so he plays a level above that and puts money in both boxes. If you decide to two box, his model of you decides to two box, so he plays a level above that and only puts money in one box. Whatever method of resolving the dilemma you apply, his model of you also applies: if you decide to flip a coin, his model of you also decides to flip a coin, and because Omega models the whole universe perfectly, not just you, the coin in his model shows the same face as the coin you actually flip. This does essentially require Omega to be able to fold up the territory and put it in his pocket, but it doesn’t require any backwards causality. Real-life Newcomblike dilemmas involve predictors who are very reliable, but not completely infallible.
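One way to picture the coin-flip case without backwards causality is a toy sketch under the strong assumption that the agent, coin included, is just a deterministic program Omega can copy and run ahead of time (all names here are illustrative):

```python
# Minimal sketch of "Omega simulates you, coin and all". The agent is a
# deterministic program whose randomness comes from the state of the
# (simulated) world, so Omega can run an identical copy *before* filling the
# boxes -- prediction, not backwards causality.

import random

def agent_decision(world_seed):
    # Whatever procedure the agent uses -- including "flip a coin" -- is a
    # function of the state of the world it lives in.
    rng = random.Random(world_seed)
    return "one-box" if rng.random() < 0.5 else "two-box"

def omega_fill_boxes(world_seed):
    # Omega runs the same model of the world ahead of time.
    predicted = agent_decision(world_seed)
    if predicted == "one-box":
        return {"opaque": 1_000_000, "transparent": 1_000}
    return {"opaque": 0, "transparent": 1_000}

world_seed = 42                         # stands in for "the exact state of the universe"
boxes = omega_fill_boxes(world_seed)    # happens first
choice = agent_decision(world_seed)     # happens later, but matches the model
print(choice, boxes)
```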
What if you also tell them that you’ve made a prediction about them, and if your prediction is correct, they will get the money and not be shot even if their decision was to get shot? (If your prediction was wrong, the same happens as in your original game.)
What if you were in that very situation, with Omega, whose predictions are always right, holding the money and the gun? Could you make a distinction between the choices offered to you?
I could choose either, knowing that the results would be the same either way. Either I choose the money, in which case Omega has predicted that I will choose the money, and I get the money and don’t get shot; or I choose the bullet, in which case Omega has predicted that I will choose the bullet, and I still get the money and don’t get shot. In this case, you don’t need Omega’s perfect prediction to avoid shooting the other person: you can just predict that they’ll choose to get shot every time, because whether you’re right or wrong they won’t get shot. And if you want to shoot them, you should always predict that they’ll choose the money, because predicting that they’ll choose the money and having them choose the bullet is the only branch that results in shooting them. Similarly, if you’re offered the dilemma, you should always pick the money if you don’t want to get shot, and the bullet if you do want to get shot. It’s a game with a very simple dominant strategy on each side.
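The four branches of that money-or-bullet game can be enumerated directly; this small sketch just spells out the cases described above:

```python
# The money-or-bullet variant: if the prediction is correct, the chooser gets
# the money and is not shot regardless of what they chose; if the prediction
# is wrong, they get whatever they chose (the original game).

def outcome(prediction, choice):
    if prediction == choice:                 # correct prediction
        return "money, not shot"
    return "money, not shot" if choice == "money" else "shot"

for prediction in ("money", "bullet"):
    for choice in ("money", "bullet"):
        print(f"predict {prediction}, choose {choice}: {outcome(prediction, choice)}")

# Predicting "bullet" never leads to a shooting; predicting "money" and having
# the chooser pick the bullet is the only branch that does. On the other side,
# choosing the money guarantees not being shot.
```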
I don’t see why you think this would apply to Newcomb. Omega is not an “other person”; it has no motivation, no payoff matrix.
Whatever its reasons, Omega wants to set up the boxes so that if you one box, both boxes have money, and if you two box, only one box has money. It can be said to have preferences insofar as they lead it to use its predictive powers to try to do that.
I can’t play at a higher level than Omega’s model of me. Like playing against a stronger chess player, I can only predict that they will win. At any step where I say “It will stop here, so I’ll do this instead,” it won’t stop there, and Omega will turn out to be playing at a higher level than me.
Really? If your decision theory allows you to choose either option, then how could Omega possibly predict your decision?
Because on some level my choice is going to be nonrandom (I am made of physical particles following physical rules), and if Omega is an omniscient perfect reasoner, it can determine my choice in advance even if I can’t.
But as it happens, I would choose the money, because choosing the money is a dominant strategy for anything up to absolute certainty in the other party’s predictive abilities, and I’m not inclined to start behaving differently as soon as I theoretically have absolute certainty.
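A quick sketch of the “dominant up to absolute certainty” claim, writing q (my notation, not used above) for the probability that the other party predicts correctly:

```python
# Probability of being shot in the money-or-bullet game, as a function of the
# predictor's accuracy q.

def p_shot(choice, q):
    if choice == "money":
        return 0.0          # never shot: either branch ends with money, not shot
    return 1.0 - q          # bullet: shot exactly when the prediction is wrong

for q in (0.5, 0.9, 0.99, 1.0):
    print(q, p_shot("money", q), p_shot("bullet", q))

# Choosing the money is never worse and is strictly safer whenever q < 1;
# only at q = 1 do the two choices give identical outcomes.
```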
If your decision theory allows you to choose either option
What you actually choose is one particular option (you may even strongly suspect in advance which one; and someone else might know it even better). “Choice” doesn’t imply lack of determinism. If what you choose is something definite, it could as well be engraved on a stone tablet in advance, if it were possible to figure out what the future choice turns out to be. See Free will (and solution).