I think we’re more in agreement than at odds here. The edict to avoid mixing or switching perspectives seems pretty strong. I’m not sure I have a good mechanism for picking WHICH perspective to apply to which problems, though. The setup of this (and of Sleeping Beauty) is such that using the probability of 0.9 is NOT actually in your own interest.
This is because of the cost of all the times you’d draw red and have to pay for the idiot version of you who drew green. The universe doesn’t care about your choice of perspective; in that sense it’s simply incorrect to use that probability.
The only out I know is to calculate the outcomes from both perspectives, including the “relevant counterfactuals” (which I struggle to define), or to just accept predictably bad outcomes in setups like these (which is what actually happens in a lot of real-world equilibria).
The probability of 0.9 is the correct one to use to derive “my” strategies maximizing “my” personal interest. E.g., if all the other participants decide to say yes to the bet, what is your best strategy? Based on the probability of 0.9, you should also say yes; based on the probability of 0.5, you would say no. The former will yield you more money, which would be obvious if the experiment were repeated a large number of times.
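Here is a minimal Monte-Carlo sketch of that claim. The problem’s exact numbers aren’t restated in this thread, so I’m assuming a concrete version: a fair coin; heads puts 18 green and 2 red balls in the jar, tails puts 2 green and 18 red; green-ball holders are offered the bet, which is on only if all of them consent; the group wins $1 on heads and loses $3 on tails, split among all 20 participants. Different numbers change the payoffs but not the structure.

```python
import random

N = 20                   # participants (from the problem)
WIN, LOSS = 1.0, -3.0    # assumed group stakes: heads / tails
TRIALS = 200_000

def play(yes_on_green):
    """One game. Everyone else always says yes; "you" are participant 0.
    The bet is on only if every green-ball holder consents, so you are
    the only possible dissenter. Returns (your payoff, you_green, heads)."""
    heads = random.random() < 0.5
    n_green = 18 if heads else 2          # assumed ball counts
    balls = ["green"] * n_green + ["red"] * (N - n_green)
    random.shuffle(balls)
    you_green = balls[0] == "green"
    bet_on = yes_on_green or not you_green
    total = (WIN if heads else LOSS) if bet_on else 0.0
    return total / N, you_green, heads   # payoff split among all 20

# 1) The per-person credence: P(heads | you drew green) should be ~0.9.
greens = heads_and_green = 0
for _ in range(TRIALS):
    _, you_green, heads = play(True)
    if you_green:
        greens += 1
        heads_and_green += heads
print("P(heads | green) ≈", heads_and_green / greens)

# 2) Given everyone else says yes, compare your two personal strategies.
for strat, label in [(True, "yes on green"), (False, "no on green")]:
    avg = sum(play(strat)[0] for _ in range(TRIALS)) / TRIALS
    print(f"{label}: your average payoff per game ≈ {avg:+.4f}")
```

With these assumed stakes, both strategies lose on average, but “yes on green” loses less (about −$0.050 per game versus −$0.065): saying no only ever blocks the bet in the worlds where you hold green, which are mostly heads-worlds where the bet would have paid, while letting the mostly-tails losing bets through when you hold red. So, given everyone else is committed to yes, the 0.9-based choice really is the one that serves your personal interest.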
You astutely pinpointed that saying yes is not beneficial because you end up paying for the decisions of the idiot versions of you who drew green in all the cases where you drew red. But this analysis rests on the assumption that your personal decision prescribes the actions of all participants in similar situations (the assumption Radford Neal first argued against, with which I agree). Under that assumption, the decision is no longer a personal decision; it is a decision for everyone, and it is evaluated by the overall payoff. That is a coordination strategy, which is based on an objective perspective and should use the probability of 0.5.
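A quick worked comparison with the same assumed stakes makes the two evaluations explicit. The coordination view uses the objective 0.5: if every green-ball holder says yes, the group’s expected payoff is 0.5(+$1) + 0.5(−$3) = −$1 per game, versus $0 if everyone says no, so the coordinated answer is no. The personal credence 0.9 instead computes 0.9(+$1) + 0.1(−$3) = +$0.60 and endorses the bet, which is correct only for a genuinely personal decision, not for one that binds everyone.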
The problem is set up in a way that makes people confound the two. Suppose the payoff is divided not among all 20 participants but only among the people holding red balls. The resulting coordination strategy would still be the same (the motivation to coordinate could be that the same group of 20 participants will keep playing for a large number of games). But the distinction between a personal strategy maximizing personal payoff and a coordination strategy maximizing the overall payoff would become obvious: the personal strategy after drawing a green ball is to do whatever you want, because the outcome does not affect you (which is known in advance when coming up with the pre-game coordination plan), while the coordination strategy would remain the same: say no to the bet. In such a setup, people would be less likely to mix the two strategies and present the result as an inconsistency paradox.
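Running the same assumed numbers through this variant shows why. The group totals are unchanged, so the coordinated evaluation is still 0.5(+$1) + 0.5(−$3) = −$1 per game for “everyone says yes” versus $0 for “everyone says no”, and coordination still says no. But a green-ball holder’s own payoff is now $0 no matter how they vote (their vote only moves money to or from the red-ball holders), so the personal strategy is genuinely arbitrary, and the two strategies can no longer be mistaken for answers to the same question.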