Wait. Presumably the pre-game discussion resolved “never bet”, right? When you say “However, if a participant received a green ball, he shall update the probability of mostly-green-ball urn from 0.5 to 0.9.”, that’s just wrong! Your answer to question 2B is true in some sense, but very misleading in setups where the probabilities are formed based on different numbers of independent observers/predictors. In Sleeping Beauty, the possibility of multiple wakings confuses things; in this example, the difference between 2 and 18 green-ball-holders does the damage.
It’s the symmetry assumption that exposes the problem. If you knew you were in spot 1, then you’d be correct that a green ball is evidence for the mostly-green urn (but you’d have to give up the symmetry argument). Thus, the question is equivalent to “what evidence do I have that I’m in position 1?”, which makes the similarity to Sleeping Beauty even clearer.
Whether to use SSA or SIA in anthropic probability remains highly dependent on the question setup.
The Sleeping Beauty problem and this paradox are highly similar; I would say they are caused by the same thing: switching of perspectives. However, there is one important distinction.
For the current paradox, there is an actual sampling process for the balls, so there is no need to assume a reference class for “I”. Take who I am (which person’s perspective I am experiencing the world from) as a given; the ball-assigning process treats “I” and the other participants as equals. So there is no need to interpret “I” as a random sample from all 20 participants: you can perform the probability calculation like a regular probability problem. This means there is no need to make an SIA-like assumption. The question does not depend on how you construe “the evidence... that I’m in position 1”.
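As a concrete check (a sketch assuming the setup implied above: a fair coin picks one of two urns, one with 18 green and 2 red balls, the other with 2 green and 18 red, and the 20 balls are dealt uniformly to the 20 participants), the regular Bayesian calculation gives exactly the 0.9:

```python
from fractions import Fraction

# Assumed setup: a fair coin picks the urn -- mostly-green (18 green, 2 red)
# or mostly-red (2 green, 18 red) -- and the balls are dealt to 20 participants.
prior = Fraction(1, 2)
p_green_if_mostly_green = Fraction(18, 20)
p_green_if_mostly_red = Fraction(2, 20)

# Ordinary Bayes' theorem, treating "I received a green ball" as plain evidence.
posterior = (prior * p_green_if_mostly_green) / (
    prior * p_green_if_mostly_green + (1 - prior) * p_green_if_mostly_red
)
print(posterior)  # 9/10
```

No anthropic assumption appears anywhere: the likelihoods come from the ball-dealing process itself.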
I think it is pretty uncontroversial that, if we take all the betting and money away from the question, we can all agree the probability becomes 0.9 once I receive a green ball. So if I understand correctly, by disagreeing with this probability you are in the same position as Ape in the Coat: the correct probability depends on the betting scheme. That is consistent with your later statement that “whether to use SSA or SIA...dependent on the question setup.”
My position has always been never to use any anthropic assumption: not SSA, SIA, FNC, or anything else. They all lead to paradoxes. Instead, take the perspective, or in your words “I am in position 1”, as primitively given, and reason within it. In the current paradox, that means either reasoning from the perspective of a participant and using the probability of 0.9 to make decisions in your own interest, or thinking in terms of a coordination strategy by reasoning from an impartial perspective with the probability remaining at 0.5. But never mix the two.
I think we’re more in agreement than at odds, here. The edict to avoid mixing or switching perspectives seems pretty strong. I’m not sure I have a good mechanism for picking WHICH perspective to apply to which problems, though. The setup of this (and of Sleeping Beauty) is such that using the probability of 0.9 is NOT actually in your own interest.
This is because of the cost of all the times you’d draw red and have to pay for the idiot version of you who drew green—the universe doesn’t care about your choice of perspective; in that sense it’s just incorrect to use that probability.
The only out I know is to calculate the outcomes of both perspectives, including the “relevant counterfactuals”, which is what I struggle to define. Or to just accept predictably-bad outcomes in setups like these (which is what actually happens in a lot of real-world equilibria).
The probability of 0.9 is the correct one to use to derive “my” strategies maximizing “my” personal interest. E.g., if all the other participants decide to say yes to the bet, what is your best strategy? Based on the probability of 0.9 you should also say yes, but based on the probability of 0.5 you would say no. The former will yield you more money, which would be obvious if the experiment were repeated a large number of times.
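Using the paperclip stakes from the Outlawing Anthropics dilemma quoted later in the thread (+1 if the urn is mostly-green, −3 if mostly-red) as a stand-in for the actual payoffs, the two recommendations come straight out of the expected-value calculation:

```python
from fractions import Fraction

# Hypothetical stakes borrowed from the quoted paperclip dilemma:
# +1 if the urn is mostly-green, -3 if it is mostly-red.
PAYOFF_GREEN_URN, PAYOFF_RED_URN = Fraction(1), Fraction(-3)

def expected_value(p_mostly_green):
    """Expected payoff of taking the bet, given a credence in the mostly-green urn."""
    return p_mostly_green * PAYOFF_GREEN_URN + (1 - p_mostly_green) * PAYOFF_RED_URN

print(expected_value(Fraction(9, 10)))  # 3/5 -> take the bet
print(expected_value(Fraction(1, 2)))   # -1  -> refuse
```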
You astutely pinpointed that saying yes is not beneficial because, when you draw red, you are paying for the decisions of the idiot versions of you who drew green. This analysis is based on the assumption that your personal decision prescribes the actions of all participants in similar situations (the assumption that Radford Neal first argued against, and with which I agree). Then such a decision is no longer a personal decision; it is a decision for all, and it is evaluated by the overall payoff. That is a coordination strategy, which is based on an objective perspective and should use the probability of 0.5.
The problem is set up in a way that makes people confound the two. Suppose the payoff is divided not among all 20 participants but only among the people holding red balls. The resulting coordination strategy would still be the same (the motivation for coordinating can be that the same group of 20 participants will keep playing a large number of games). But the distinction between the personal strategy maximizing personal payoff and the coordination strategy maximizing overall payoff would be obvious: the personal strategy after drawing a green ball is to do whatever you want, because the outcome does not affect you (which is well known when coming up with the pre-game coordination plan), while the coordination strategy would remain the same: say no to the bet. In such a setup, people would be less likely to mix the two strategies and pose the mixture as an inconsistency paradox.
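A quick simulation shows why the coordination evaluation uses 0.5: a sketch, again assuming the quoted paperclip stakes (+1 on mostly-green, −3 on mostly-red) and that “every green-ball holder says yes” means the bet always fires, since there is always at least one green ball:

```python
import random

random.seed(0)

def average_group_payoff(bet_when_green, n_games=200_000):
    """Mean payoff per game when every green-ball holder follows the same policy."""
    total = 0.0
    for _ in range(n_games):
        mostly_green = random.random() < 0.5  # fair coin picks the urn
        if bet_when_green:
            # At least one participant always holds green, so the bet is accepted.
            total += 1.0 if mostly_green else -3.0
    return total / n_games

print(average_group_payoff(True))   # close to -1 per game
print(average_group_payoff(False))  # 0.0
```

Evaluated as a policy for the whole group, betting loses about one unit per game on average, while never betting breaks even; evaluated from an individual green-ball holder’s seat, the same bet looks favorable.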
With “However, if a participant received a green ball, he shall update the probability of mostly-green-ball urn from 0.5 to 0.9.” dadadarren is just restating the reasoning from the Outlawing Anthropics post:
Let the dilemma be, “I will ask all people who wake up in green rooms if they are willing to take the bet ‘Create 1 paperclip if the logical coinflip came up heads, destroy 3 paperclips if the logical coinflip came up tails’. (Should they disagree on their answers, I will destroy 5 paperclips.)” Then a paperclip maximizer, before the experiment, wants the paperclip maximizers who wake up in green rooms to refuse the bet. But a conscious paperclip maximizer who updates on anthropic evidence, who wakes up in a green room, will want to take the bet, with expected utility ((90% * +1 paperclip) + (10% * −3 paperclips)) = +0.6 paperclips.
I think you partly agree with dadadarren.
Whether to use SSA or SIA in anthropic probability remains highly dependent on the question setup.
Yes, in a way it is the question setup. But which part? I think dadadarren’s answer is the use of terms like “I” and “now” in an ambiguous way.