The Sleeping Beauty problem and this paradox are highly similar; I would say they are caused by the same thing: switching of perspectives. However, there is one important distinction.
For the current paradox, there is an actual sampling process for the balls, so there is no need to assume a reference class for “I”. Take who I am (which person’s perspective I am experiencing the world from) as a given; the ball-assigning process treats “I” and the other participants as equals. So there is no need to interpret “I” as a random sample from all 20 participants, and you can perform the probability calculations as in a regular probability problem. This means there is no need to make an SIA-like assumption: the question does not depend on how you construe “the evidence ... that I’m in position I”.
I think it is pretty uncontroversial that, if we take all the betting and money away from the question, we can all agree the probability becomes 0.9 once I receive a green ball. So if I understand correctly, by disagreeing with this probability, you are in the same position as Ape in the Coat: the correct probability depends on the betting scheme. This is consistent with your later statement that “whether to use SSA or SIA ... dependent on the question setup.”
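The 0.9 figure can be checked directly. A caveat: the exact setup is not restated in this thread, so the simulation below assumes the standard version of the problem (a fair coin is flipped; on heads 18 of the 20 participants receive green balls and 2 receive red, on tails only 2 receive green). Under that assumption, Bayes gives P(heads | my ball is green) = 0.5·(18/20) / (0.5·(18/20) + 0.5·(2/20)) = 0.9, and a quick Monte Carlo agrees:

```python
import random

# Assumed setup (not restated above): fair coin; heads -> 18 of 20
# participants get green balls, tails -> only 2 of 20 get green.
N, GREEN_IF_HEADS, GREEN_IF_TAILS = 20, 18, 2

def posterior_heads_given_green(trials=200_000):
    heads_count = green_count = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        greens = GREEN_IF_HEADS if heads else GREEN_IF_TAILS
        # "I" am one fixed participant; balls are assigned at random,
        # treating me and the other participants as equals.
        if random.random() < greens / N:
            green_count += 1
            heads_count += heads
    return heads_count / green_count

print(posterior_heads_given_green())  # ≈ 0.9
```

Note that nothing here requires an anthropic assumption: “I” is just one fixed participant in an ordinary sampling process, exactly as argued above.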
My position has always been to never use any anthropic assumption, whether SSA, SIA, FNC, or anything else: they all lead to paradoxes. Instead, take the perspective, or in your words “I am in position I”, as primitively given, and reason within that perspective. In the current paradox, that means either reasoning from the perspective of a participant and using the probability of 0.9 to make decisions in your own interest, or thinking in terms of a coordination strategy by reasoning from an impartial perspective, where the probability remains 0.5. But never mix the two.
I think we’re more in agreement than at odds, here. The edict to avoid mixing or switching perspectives seems pretty strong. I’m not sure I have a good mechanism for picking WHICH perspective to apply to which problems, though. The setup of this (and of Sleeping Beauty) is such that using the probability of 0.9 is NOT actually in your own interest.
This is because of the cost of all the times you’d draw red and have to pay for the idiot version of you who drew green—the universe doesn’t care about your choice of perspective; in that sense it’s just incorrect to use that probability.
The only out I know is to calculate the outcomes of both perspectives, including the “relevant counterfactuals”, which is what I struggle to define. Or to just accept predictably-bad outcomes in setups like these (which is what actually happens in a lot of real-world equilibria).
The probability of 0.9 is the correct one to use to derive “my” strategies maximizing “my” personal interest. For example, if all other participants decide to say yes to the bet, what is your best strategy? Based on the probability of 0.9, you should also say yes; based on the probability of 0.5, you would say no. However, the former will yield you more money, which becomes obvious if the experiment is repeated a large number of times.
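This best-response claim can also be simulated, with the same caveats as before plus two more: the stakes below ($12 to the group on heads, −$52 on tails, split among all 20) and the rule that the bet takes effect only if every green-ball holder says yes are both hypothetical fillers for details not restated in this thread. Given that everyone else always says yes, your choice only matters when you hold green, and “yes when green” does come out ahead of “no when green” over many repetitions:

```python
import random

N, GREEN_IF_HEADS, GREEN_IF_TAILS = 20, 18, 2
WIN, LOSS = 12.0, -52.0  # hypothetical stakes, shared among all 20

def my_average_payoff(say_yes_on_green, trials=200_000):
    """My per-game payoff, assuming everyone else always says yes and
    the bet takes effect only if every green-ball holder says yes."""
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        greens = GREEN_IF_HEADS if heads else GREEN_IF_TAILS
        i_am_green = random.random() < greens / N
        # If I refuse while holding green, I block the bet; otherwise
        # the other greens' unanimous "yes" makes it go through.
        if say_yes_on_green or not i_am_green:
            total += (WIN if heads else LOSS) / N
    return total / trials

print(my_average_payoff(True))   # saying yes when green
print(my_average_payoff(False))  # refusing when green does worse
```

Both averages are negative under these stakes (from the impartial 0.5 perspective the bet is a group loss either way), yet “yes when green” loses less: blocking the bet only when you happen to hold green mostly blocks the favorable heads worlds, which is exactly what the 0.9 posterior encodes.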
You astutely pinpointed that saying yes is not beneficial because you end up paying for the decisions of the idiot versions of you in the games where you drew red. But this analysis is based on the assumption that your personal decision prescribes the actions of all participants in similar situations (the assumption Radford Neal first argued against, with which I agree). Then such a decision is no longer a personal decision; it is a decision for all, and it is evaluated by the overall payoff. That is a coordination strategy, which is based on an objective perspective and should use the probability of 0.5.
The problem is set up in a way that makes people confound the two. Suppose the payoff is divided not among all 20 participants but only among the people holding red balls. The resulting coordination strategy would still be the same (the motivation for coordination could be that the same group of 20 participants keeps playing a large number of games). But the distinction between the personal strategy maximizing personal payoff and the coordination strategy maximizing overall payoff would then be obvious: the personal strategy after drawing a green ball is to do whatever you want, because the outcome does not affect you (which everyone knows when drawing up the pre-game coordination plan), while the coordination strategy remains the same: say no to the bet. In such a setup, people would be less likely to mix the two strategies and present the mixture as an inconsistency paradox.