This is equivalent to the implicit “what if Omega can’t predict what I would do?” reasoning done by two-boxers in Newcomb’s problem. Neither possibility is in the solution domain, provided that one does not fight the hypothetical (“the players are rational” in your case, “Omega is a perfect predictor” in Newcomb’s). Player two does not get to move, so there is no point considering that. Omega knows exactly what you’d do, so there is no point considering what to do if he is wrong.
And this is the problem I have with all the Newcomb/Omega business.
The hypothetical should be fought. We should no more assign absolute certainty to Omega’s predictive power than we should assign absolute certainty to CDT’s predictive power.
Instead of assigning probability 0 to the theory that Omega can make a mistake, assign it some small probability δ_Omega. Similarly, assign δ_CDT to the probability that the CDT analysis is wrong. I’m too lazy to actually do the math, but do you have any doubt that the right decision will depend on the ratio of the two deltas?
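To make that concrete, here is a minimal sketch in Python, modeling only δ_Omega for brevity. The payoff amounts are the conventional Newcomb values ($1,000,000 in the opaque box, $1,000 in the transparent one), and the symmetric error model is my assumption, not part of the thread.

```python
# Minimal sketch: how the one-box/two-box decision depends on delta_omega,
# the assumed probability that Omega's prediction is wrong.
# Payoff amounts are the conventional Newcomb values, assumed here.

BIG = 1_000_000   # opaque box: filled iff Omega predicted one-boxing
SMALL = 1_000     # transparent box: always present

def ev_one_box(delta_omega: float) -> float:
    # Omega predicted one-boxing correctly with probability 1 - delta_omega.
    return (1 - delta_omega) * BIG

def ev_two_box(delta_omega: float) -> float:
    # With probability delta_omega Omega wrongly predicted one-boxing,
    # so the two-boxer also collects the full opaque box.
    return SMALL + delta_omega * BIG

for d in (0.0, 0.1, 0.499, 0.5, 0.9):
    choice = "one-box" if ev_one_box(d) > ev_two_box(d) else "two-box"
    print(f"delta_omega={d}: {choice}")
```

Under these numbers the crossover sits at δ_Omega = 0.4995, so one-boxing wins whenever Omega is even slightly better than chance; a fuller version would weigh δ_CDT on the other side of the comparison in the same way.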
There’s really nothing to see here. This is just another case of generating paradoxes in probability theory when you don’t do a full analysis using finite, nonzero probability assignments.
This is a similar issue to the one the OP has come up against. Proposition 1 is that Player 1 obeys certain game-theoretic rules. Proposition 2 is the report that implies Player 1 violated those rules. When your propositions seem mutually contradictory because you have lazily assigned probability 0 to either one being false, hilarity ensues. Assign finite values, and the mysteries are resolved.
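As a sketch of how finite assignments dissolve the apparent contradiction, here is a small Bayesian update. The prior, the report error rate, and the assumption that an irrational Player 1 picks B or C are all illustrative numbers of mine, not from the thread.

```python
# Sketch: give both "contradictory" propositions finite probabilities and
# update on the report. All numbers are illustrative assumptions.

p_rational = 0.999     # prior that Player 1 obeys the rules (picks A)
p_report_err = 0.01    # chance the "A was not picked" report is wrong

# Likelihood of observing a "not A" report under each hypothesis:
p_report_if_rational = p_report_err          # only via a mistaken report
p_report_if_irrational = 1 - p_report_err    # a correct report of B or C

p_report = (p_rational * p_report_if_rational
            + (1 - p_rational) * p_report_if_irrational)

posterior = p_rational * p_report_if_rational / p_report
print(f"P(rational | report) = {posterior:.3f}")   # ~0.910, no paradox
```

The report simply shifts weight between “the report is wrong” and “Player 1 is irrational” in proportion to the two finite error rates; nothing ever has to be conditioned on a probability-zero event.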
You are basically using the trembling-hand equilibrium concept. I picked the payoffs so this would not yield an easy solution. Consider an equilibrium where Player 1 intends to pick A, but there is a small but equal chance he will pick B or C by mistake. In this equilibrium Player 2 would pick Y if he got to move; but then Player 1 would always intend to pick C, effectively pretending he had made a mistake.
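Since the payoff matrix itself isn’t quoted in this thread, here is a sketch with hypothetical payoffs chosen to satisfy the constraints the discussion relies on: only (C, Y) beats A for Player 1, X always leaves Player 1 below A, and under an equal B/C tremble Player 2 prefers Y.

```python
# HYPOTHETICAL payoffs (the original post's numbers are not quoted here),
# chosen so that only (C, Y) beats A for Player 1 and X always punishes him.

P1 = {"A": 3, ("B", "X"): 0, ("B", "Y"): 1, ("C", "X"): 0, ("C", "Y"): 5}
P2 = {("B", "X"): 0, ("B", "Y"): 4, ("C", "X"): 2, ("C", "Y"): 1}

def p2_best_response(p_c: float) -> str:
    """Player 2's best reply given belief P(Player 1 picked C) = p_c."""
    ev_x = (1 - p_c) * P2[("B", "X")] + p_c * P2[("C", "X")]
    ev_y = (1 - p_c) * P2[("B", "Y")] + p_c * P2[("C", "Y")]
    return "X" if ev_x > ev_y else "Y"

print(p2_best_response(0.5))     # equal trembles -> "Y"
print(P1[("C", "Y")] > P1["A"])  # so Player 1 deviates to C: True
print(p2_best_response(1.0))     # but a certain C flips the reply to "X"
```

The tremble story unravels exactly as described: the equilibrium that justifies Y also creates the incentive to fake the tremble.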
Player two does not get to move, so there is no point considering that.
First, you are implicitly using circular reasoning. You cannot tell me that picking B or C is irrational until you tell me what beliefs Player 2 would have if B or C were picked.
Also, imagine you are playing the game against someone you think is rational. You are Player 2. You are told that A was not picked. What do you do?
Also, imagine you are playing the game against someone you think is rational. You are Player 2. You are told that A was not picked. What do you do?
If I think Player 1 is rational, I assume he must be modeling my decision-making process somehow. If his model of my decision-making process makes picking B or C seem rational, he must be modeling my choice of X and Y in a way that gives him a chance at a higher payoff than he can get by choosing A. Since every combination of {B, C} and {X, Y} pays him less than A does except (C, Y), no model of my decision-making process would make B a good option, while some models (though inaccurate ones) would recommend C as a potentially good option. So while it’s uncertain, it’s very likely I’m at C. In that case, I should pick X, and shake my head at my opponent for drastically discounting how rational I am, if he thought he could somehow go one level higher and get the big payoff.
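Under the same hypothetical payoffs as in the sketch above, “very likely I’m at C” can be made precise: there is a belief threshold past which X beats Y for Player 2.

```python
# How confident must Player 2 be that the state is C before X beats Y?
# Uses the same HYPOTHETICAL payoffs as the earlier sketch.

P2 = {("B", "X"): 0, ("B", "Y"): 4, ("C", "X"): 2, ("C", "Y"): 1}

# Solve EV_X(p) = EV_Y(p) for p = P(C):
num = P2[("B", "Y")] - P2[("B", "X")]
den = num + P2[("C", "X")] - P2[("C", "Y")]
print(f"pick X once P(C) > {num / den:.2f}")   # -> 0.80 with these numbers
```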
imagine you are playing the game against someone you think is rational. You are Player 2. You are told that A was not picked.
That’s the contradiction right there. If you are Player 2 and get to move, Player 1 is not rational, because you can always reduce their payoff below A’s by picking X.
Your behavior in impossible-in-reality but in some sense possible-to-think-about situations may well influence others’ decisions, so it may be useful to decide what to do in impossible situations if you expect to be dealing with others who are moved by such considerations. Since decisions make their alternatives impossible, but are based on evaluation of those alternatives, considering situations that eventually turn out to be impossible (as a result of being decided to become impossible) is a very natural thing to do.
But why is not picking A “impossible-in-reality”? You cannot answer until you tell me what Player 2’s beliefs would be if A was not picked.
I was making the more general point that impossible situations (abstract arguments that aren’t modeled by any of the “possible” situations being considered) can matter, that impossibility is not necessarily significant. Apart from that, I agree that we don’t actually have a good argument for the impossibility of any given action by Player 1, if it depends on what Player 2 could be thinking.
Because for Player 1 to increase his payoff over picking A, the only option he can choose is C, based on an accurate prediction, via some process of reasoning, that Player 2 will pick Y, which would mean Player 2 making a false prediction about Player 1′s behaviour. You have stated both players are rational, so I will assume they have equal powers of reason. In that case, if it is possible for Player 2 to make a false prediction based on their powers of reason, then Player 1 must be equally capable of making a wrong prediction, meaning that Player 1 should avoid the uncertainty and always go for the guaranteed payoff.
To formulate this mathematically you would need to determine the probability of making a false prediction and factor that into the odds, which I regret is beyond my ability.
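The calculation isn’t too bad under the hypothetical payoffs used earlier: let ε be the probability that Player 2’s prediction (and hence move) is wrong, and compare picking C against the guaranteed A. The single-parameter error model is my simplification.

```python
# Sketch of the suggested calculation with the earlier HYPOTHETICAL payoffs.
# eps = probability Player 2 predicts wrongly and so picks Y at C.

A = 3                # Player 1's guaranteed payoff
C_Y, C_X = 5, 0      # Player 1's payoff at C if Player 2 picks Y / X

def ev_pick_c(eps: float) -> float:
    # A correctly predicting Player 2 picks X at C; Y happens only by error.
    return eps * C_Y + (1 - eps) * C_X

threshold = (A - C_X) / (C_Y - C_X)
print(f"C beats A only if Player 2 errs more than {threshold:.0%} of the time")
for eps in (0.05, 0.3, 0.7):
    print(eps, "-> pick C" if ev_pick_c(eps) > A else "-> pick A")
```

With these numbers Player 1 needs Player 2 to blunder more than 60% of the time before C pays, which supports the conclusion above: take the guaranteed payoff.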
That’s the contradiction right there. If you are Player 2 and get to move, Player 1 is not rational, because you can always reduce their payoff below A’s by picking X.
Note that “each player cares only about maximizing his own payoff”. By assumption, player 2 has only a selfish preference, not a sadistic one, so they’ll only choose X (or be more likely to choose X) if they expect that to improve their own expected score. If player 1 can credibly expect player 2 to play Y often enough when given the opportunity, it is not irrational for player 1 to give player 2 that opportunity by playing B or C.
Please answer the question: what would you do if you are player 2 and get to move? Might you pick Y? And if so, how can you conclude that Player 1 was irrational not to pick A?
what would you do if you are player 2 and get to move?
I will realize that I was lied to, and that Player 1 is not rational. Now, if you are asking what Player 2 should do in a situation where Player 1 does not follow the best possible strategy, I think Eliezer’s solution above works in this case. Or Emile’s. It depends on how you model irrationality.
I don’t agree, since you can’t prove that not picking A is irrational until you tell me what Player 2 would do if he gets to move, and we can’t answer that last question.