Omniscient Omega doesn’t entail backwards causality; it only entails omniscience. If Omega can extrapolate how you would choose boxes from complete information about your present, you’re not going to fool it no matter how many times you play the game.
I agree if you say that a more accurate statement would have been “omniscient Omega entails either backwards causality or the absence of free will.”
I actually assign a rather high probability to free will not existing; however, discussing decision theory under that assumption is not interesting at all.
Regardless of the issue of free will (which I don’t want to discuss because it is obviously getting us nowhere), if Omega makes its prediction solely based on your past, then your past suddenly becomes an inherent part of the problem. This means that two-boxing-You either has a different past than one-boxing-You and therefore plays a different game, or that Omega makes the same prediction for both versions of you, in which case two-boxing-You wins.
Two-boxing-you is a different you than one-boxing-you. They make different decisions in the same scenario, so something about them must not be the same.
Omega doesn’t make its decision solely based on your past; it makes the decision based on all information salient to the question. Omega is an omniscient perfect reasoner. If there’s anything that will affect your decision, Omega knows about it.
If you know that Omega will correctly predict your actions, then you can draw a decision tree which crosses off the outcomes “I choose to two box and both boxes contain money,” and “I choose to one box and the other box contains no money,” because you can rule out any outcome that entails Omega having mispredicted you.
Probability is in the mind. The reality is that either one or both boxes already contain money, and you are already going to choose one box or both, in accordance with Omega’s prediction. Your role is to run through the algorithm to determine what is the best choice given what you know. And given what you know, one boxing has higher expected returns than two boxing.
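To make “higher expected returns” concrete, here is a minimal sketch of the expected-value comparison, assuming the standard Newcomb payoffs ($1,000 in the always-visible box, $1,000,000 in the opaque one) and an accuracy p for Omega’s prediction; the dollar amounts and accuracy values are illustrative assumptions, not something stated above.

```python
# Minimal sketch of the expected-value comparison. The payoff amounts and
# accuracy values are illustrative assumptions, not taken from the thread.

SMALL = 1_000      # always-visible box
BIG = 1_000_000    # opaque box, filled only if Omega predicts one-boxing

def expected_value(choice: str, p_correct: float) -> float:
    """Expected payoff if Omega predicts your actual choice with probability p_correct."""
    if choice == "one-box":
        # Correct prediction: the opaque box is full. Wrong: it's empty.
        return p_correct * BIG
    # Two-box: a correct prediction means the opaque box is empty,
    # a wrong prediction means you walk away with both.
    return p_correct * SMALL + (1 - p_correct) * (BIG + SMALL)

for p in (0.5, 0.9, 0.99, 1.0):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
# Under these assumed payoffs, one-boxing comes out ahead for any p above ~0.5005.
```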
Omega doesn’t make its decision solely based on your past; it makes the decision based on all information salient to the question. Omega is an omniscient perfect reasoner. If there’s anything that will affect your decision, Omega knows about it.
Omega cannot have the future as an input; any knowledge Omega has about the future is a result of logical reasoning based upon its knowledge of the past.
If you know that Omega will correctly predict your actions
You cannot know this, unless you (a) consider backwards causality, which is wrong, or (b) consider absence of free will, which is uninteresting.
You also cannot know that Omega will correctly predict your choice with p≠0.5. At best, you can only know that Omega predicts you to one-box/two-box with p=whatever.
If you know that Omega will correctly predict your actions
You cannot know this, unless you (a) consider backwards causality, which is wrong, or (b) consider absence of free will, which is uninteresting.
Yes, you can. Something existing that can predict your actions in no way precludes free will. (I suppose definitions of “free will” could be constructed such that being predictable negates it, in which case you can still be predicted, you don’t have free will, and the situation is exactly as interesting as it was before.)
Let us assume a repeated game where an agent is presented with a decision between A and B, and Omega observes that the agent chooses A in 80% and B in 20% of the cases.
If Omega now predicts the agent to choose A in the next instance of the game, then the probability of the prediction being correct is 80%: from Omega’s perspective as long as the roll hasn’t been made, and from the agent’s perspective as long as no decision has been made. However, once the decision has been made, the probability of the prediction being correct from the perspective of the agent is either 100% (A) or 0% (B).
If, instead, Omega is a ten-sided die with 8 A-sides and 2 B-sides, then the probability of the prediction being correct is 68%: from Omega’s perspective, and from the agent’s perspective as long as no decision has been made. However, once the decision has been made, the probability of the prediction being correct from the perspective of the agent is either 80% (A) or 20% (B).
If the agent knows that Omega makes the prediction before the agent makes the decision, then the agent cannot make different decisions without affecting the probability of the prediction being correct, unless Omega’s prediction is a coin toss (p=0.5).
The only case where the probability of Omega being correct is unchangeable with p≠0.5 is the case where the agent cannot make different decisions, which I call “no free will”.
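For what it’s worth, those figures check out numerically. Here is a small simulation sketch; the 80%/20% mix and the 8/2 die come from the setup above, while the seed and trial count are arbitrary.

```python
import random

# Simulation sketch of the repeated game above: an agent that picks A 80%
# of the time, against two Omegas -- one that always predicts A, and one
# that rolls a ten-sided die with 8 A-sides and 2 B-sides.

random.seed(0)      # arbitrary
TRIALS = 100_000    # arbitrary

def agent() -> str:
    return "A" if random.random() < 0.8 else "B"

def omega_always_a() -> str:
    return "A"

def omega_die() -> str:
    return "A" if random.random() < 0.8 else "B"

for name, omega in [("always predict A", omega_always_a), ("8/2 die", omega_die)]:
    correct = sum(omega() == agent() for _ in range(TRIALS))
    print(name, correct / TRIALS)
# Expected: ~0.80 for "always predict A" and ~0.68 for the die
# (0.8*0.8 + 0.2*0.2 = 0.68). If the agent shifts its 80/20 mix, both
# accuracies shift with it -- only a fair-coin Omega stays pinned at 0.5.
```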
You are using the wrong sense of “can” in “cannot make different decisions”. The everyday subjective experience of “free will” isn’t caused by your decisions being indeterminate in an objective sense; that’s the incoherent concept of libertarian free will. Instead it seems to be based on our decisions being dependent on some sort of internal preference calculation, and the correct sense of “can make different decisions” to use is something like “if the preference calculation had a different outcome, that would result in a different decision”.
Otherwise results that are entirely random would feel more free than results that are based on your values, habits, likes, memories, and other character traits, i.e. the things that make you you. Not at all coincidentally, this is also the criterion for whether it makes sense to bother thinking about the decision.
You yourself don’t know the result of the preference calculation before you run it, otherwise it wouldn’t feel like a free decision. But whether Omega knows the result in advance has no impact on that at all.
So apparently you have not followed my advice to consider free will. I really recommend that you read up on this because it seems to cause a significant part of our misunderstanding here.
You cannot know this, unless you (a) consider backwards causality, which is wrong, or (b) consider absence of free will, which is uninteresting.
You can have “free will” in the sense of being able to do what you want within the realm of possibility, while your wants are set deterministically.
If I offer most people a choice between receiving a hundred dollars, or being shot in the head, I can predict with near certainty that they will choose the hundred dollars, because I know enough about what kind of agents they are. Any formulation of “free will” which says I should not be able to do this is simply wrong. If I were making the same offer to Queebles (a species which hates money and loves being shot in the head), I would predict the reverse. Omega, having very complete information and perfect reasoning, can predict in advance whether you will one-box or two-box.
You also cannot know that Omega will correctly predict your choice with p≠0.5. At best, you can only know that Omega predicts you to one-box/two-box with p=whatever.
You can predict that Kasparov will beat you in a chess match without knowing the specific moves he’ll make. If you could predict all the moves he’d make, you could beat him in a chess match, but you can’t. Similarly, if you could assign nonequal probabilities to how Omega would fill the boxes irrespective of your own choice, then you could act on those probabilities and beat Omega more than half the time, so that would entail a p≠0.5 probability of Omega predicting your choice.
If you play chess against a perfect chess-playing machine, which has solved the game of chess, then you can predict in advance that if you decide to play black, black will lose, and if you decide to play white, white will lose, because you know that the machine is playing on a higher level than you. And if you play through Newcomb’s problem with Omega, you can predict that if you one box, both boxes will contain money, and if you two box, only one will. Omega is on a higher level than you, the game has been played, and you already lost.
The reason why you lose in chess is that you will make the wrong moves, and the reason why you will make the wrong moves is that there are far too many of them for you to be likely to find the right ones by chance. This is not the case in a game that consists of only two different moves.
If I offer most people a choice between receiving a hundred dollars, or being shot in the head, I can predict with near certainty that they will choose the hundred dollars, because I know enough about what kind of agents they are.
What if you also tell them that you’ve made a prediction about them, and if your prediction is correct, they will get the money and not be shot even if their decision was to get shot? (If your prediction was wrong, the same happens as in your original game.)
What if you were in that very situation, with Omega, whose predictions are always right, holding the money and the gun? Could you make a distinction between the choices offered to you?
The reason why you lose in chess is that you will make the wrong moves, and the reason why you will make the wrong moves is that there are far too many of them for you to be likely to find the right ones by chance. This is not the case in a game that consists of only two different moves.
In a game with two moves, you want to model the other person, and play one level higher than that. So if I take the role of Omega and put you in Newcomb’s problem, and you think I’ll expect you to two box because you’ve argued in favor of two boxing, then you expect me to put money in only one box, so you want to one box, thereby beating your model of me. But if I expect you to have thought that far, then I want to put money in both boxes, making two boxing the winning move, thereby beating my model of you. And if you expect me to have thought that far, you want to play a level above your model of me and one box again.
If humans followed this kind of recursion infinitely, it would never resolve and you couldn’t do better than maximum entropy in predicting the other person’s decision. But people don’t do that; humans tend to follow very few levels of recursion when modeling others (example here; you can look at the comments for the results). So if one person is significantly better at modeling the other, they’ll have an edge and be able to do considerably better than maximum entropy in guessing the other person’s choice.
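A toy sketch of why that regress never settles, using the “play one level above your model” rule described above; the level-0 seed and the function name are my own illustrative choices.

```python
# Toy level-k sketch of the "play one level above your model" regress in a
# two-move game. The level-0 seed ("two-box") is an arbitrary assumption.

def one_level_higher(predicted: str) -> str:
    # Each level plays the move that beats the move predicted one level
    # below, and in a two-move game that simply flips the prediction.
    return "one-box" if predicted == "two-box" else "two-box"

choice = "two-box"  # level 0
for level in range(1, 9):
    choice = one_level_higher(choice)
    print(level, choice)
# The sequence alternates forever: followed infinitely, the recursion never
# resolves, which is why a finite modeling depth (and an edge in depth) is
# what decides such games between humans.
```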
Omega is a hypothetical entity who models the universe perfectly. If you decide to one box, his model of you decides to one box, so he plays a level above that and puts money in both boxes. If you decide to two box, his model of you decides to two box, so he plays a level above that and only puts money in one box. Any method of resolving the dilemma that you apply, his model of you also applies; if you decide to flip a coin, his model of you also decides to flip a coin, and because Omega models the whole universe perfectly, not just you, the coin in his model shows the same face as the coin you actually flip. This does essentially require Omega to be able to fold up the territory and put it in his pocket, but it doesn’t require any backwards causality. Real life Newcomblike dilemmas involve predictors who are very reliable, but not completely infallible.
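A minimal sketch of the “model of you” idea under that reading: Omega runs a copy of your decision procedure, seeded with the same state (so even a coin flip comes out the same), before filling the boxes. The payoff amounts, seed, and names here are illustrative assumptions.

```python
import random

# Minimal sketch of Omega-as-simulator: Omega runs a copy of your decision
# procedure with the same seed (standing in for "the exact state of the
# universe"), so even a coin-flip strategy is predicted correctly.
# No backwards causality is involved: the boxes are filled first.

def my_decision(rng: random.Random) -> str:
    # Any strategy at all, including "flip a coin".
    return "one-box" if rng.random() < 0.5 else "two-box"

def omega_fill_boxes(decision_procedure, seed: int) -> dict:
    predicted = decision_procedure(random.Random(seed))  # Omega's model of you
    return {"transparent": 1_000,
            "opaque": 1_000_000 if predicted == "one-box" else 0}

SEED = 42                                      # arbitrary stand-in
boxes = omega_fill_boxes(my_decision, SEED)    # happens first
my_choice = my_decision(random.Random(SEED))   # happens later, but matches
print(my_choice, boxes)
# The prediction always matches the choice, because Omega's model and the
# real "you" are running the same computation on the same inputs.
```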
What if you also tell them that you’ve made a prediction about them, and if your prediction is correct, they will get the money and not be shot even if their decision was to get shot? (If your prediction was wrong, the same happens as in your original game.)
What if you were in that very situation, with Omega, whose predictions are always right, holding the money and the gun? Could you make a distinction between the choices offered to you?
I could choose either, knowing that the results would be the same either way. Either I choose the money, in which case Omega has predicted that I will choose the money, and I get the money and don’t get shot; or I choose the bullet, in which case Omega has predicted that I choose the bullet, and I still get the money and don’t get shot. In this case, you don’t need Omega’s perfect prediction to avoid shooting the other person: you can just predict that they’ll choose to get shot every time, because whether you’re right or wrong they won’t get shot. And if you want to shoot them, you should always predict that they’ll choose the money, because predicting that they’ll choose the money and having them choose the bullet is the only branch that results in shooting them. Similarly, if you’re offered the dilemma, you should always pick the money if you don’t want to get shot, and the bullet if you do want to get shot. It’s a game with a very simple dominant strategy on each side.
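To make the branch-counting explicit, here is the full outcome table for that money-or-bullet variant, under the rules as stated above (a correct prediction means money and no shooting regardless of the choice; a wrong prediction means you get what you chose). The outcome labels are mine.

```python
# Enumerating the four branches of the money-or-bullet variant: a correct
# prediction yields money and no shooting regardless of the choice; a wrong
# prediction yields whatever was chosen.

def outcome(choice: str, prediction: str) -> str:
    if choice == prediction:  # prediction correct
        return "money, not shot"
    return "money, not shot" if choice == "money" else "shot"

for prediction in ("money", "bullet"):
    for choice in ("money", "bullet"):
        print(f"predict {prediction:6} / choose {choice:6} -> {outcome(choice, prediction)}")
# Choosing the money is safe on every branch; choosing the bullet is safe
# only when it is predicted. Predicting "bullet" never gets anyone shot,
# while predicting "money" against a bullet-chooser is the one branch that does.
```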
I don’t see why you think this would apply to Newcomb. Omega is not an “other person”; it has no motivation, no payoff matrix.
Whatever its reasons, Omega wants to set up the boxes so that if you one box, both boxes have money, and if you two box, only one box has money. It can be said to have preferences insofar as they lead to it using its predictive powers to try to do that.
I can’t play at a higher level than Omega’s model of me. Like playing against a stronger chess player, I can only predict that they will win. At any step where I say “It will stop here, so I’ll do this instead,” it won’t stop there, and Omega will turn out to be playing at a higher level than me.
Really? If your decision theory allows you to choose either option, then how could Omega possibly predict your decision?
Because on some level my choice is going to be nonrandom (I am made of physical particles following physical rules), and if Omega is an omniscient perfect reasoner, it can determine my choice in advance even if I can’t.
But as it happens, I would choose the money, because choosing the money is a dominant strategy for anything up to absolute certainty in the other party’s predictive abilities, and I’m not inclined to start behaving differently as soon as I theoretically have absolute certainty.
If your decision theory allows you to choose either option
What you actually choose is one particular option (you may even strongly suspect in advance which one, and someone else might know it even better). “Choice” doesn’t imply lack of determinism. If what you choose is something definite, it could as well be engraved on a stone tablet in advance, if it were possible to figure out what the future choice turns out to be. See Free will (and solution).