I’m having a hard time making sense of what you’re arguing here:
The problem was about dynamic inconsistency in beliefs, while you are talking about a solution to dynamic inconsistency in actions.
I don’t see any inconsistency in beliefs. Initially, everyone thinks that the probability that the urn with 18 green balls is chosen is 1⁄2. After someone picks a green ball, they revise this probability to 9⁄10, which is not an inconsistency, since they have new evidence, so of course they may change their belief. This revision of belief should be totally uncontroversial. If you think a person who picks a green ball shouldn’t revise their probability in this way then you are abandoning the whole apparatus of probability theory developed over the last 250 years. The correct probability is 9⁄10. Really. It is.
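For concreteness, here is that revision worked through as a minimal sketch. The urn compositions are my reading of the standard setup, not restated in this comment: a fair coin picks one of two urns of 20 balls each, one with 18 green and 2 red, the other with 2 green and 18 red.

```python
# Bayes' rule for the posterior probability of the mostly-green urn,
# given that you drew a green ball. The urn compositions below are
# assumed from the standard version of the problem, not stated above.
p_mostly_green = 0.5               # prior: fair coin flip
p_green_if_mostly_green = 18 / 20  # chance of drawing green from that urn
p_green_if_mostly_red = 2 / 20     # chance of drawing green otherwise

posterior = (p_mostly_green * p_green_if_mostly_green) / (
    p_mostly_green * p_green_if_mostly_green
    + (1 - p_mostly_green) * p_green_if_mostly_red
)
print(posterior)  # 0.9, i.e. the 9⁄10 claimed above
```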
I take the whole point of the problem to be about whether people who for good reason initially agreed on some action, conditional on the future event of picking a green ball, will change their minds once that event actually occurs—despite that event having been completely anticipated (as a possibility) when they thought about the problem beforehand. If they do, that would seem like an inconsistency. What is controversial is the decision-theory aspect of the problem, not the beliefs.
Your assumption that people act independently of each other, which was not part of the original problem (it was even explicitly mentioned that people have enough time to discuss the problem and come to a coordinated solution before the experiment starts), allowed you to ignore this nuance.
As I explain above, the whole point of the problem is whether people might change their minds about taking the bet after seeing that they picked a green ball, despite the prior coordination. If you build into the problem description that they aren’t allowed to change their minds, then I don’t know what you think you’re doing.
My only guess would be that you are focusing not on the belief that the urn with 18 green balls was chosen, but rather on the belief in the proposition “it would be good (in expectation) if everyone with a green ball takes the bet”. Initially, it is rational to think that it would not be good for everyone to take the bet. But someone who picks a green ball should give probability 9⁄10 to the proposition that the urn with 18 green balls was chosen, and therefore also to the proposition that everyone taking the bet would result in a gain, not a loss, and one can work out that the expected gain is also positive. So they will also think “if I could, I would force everyone with a green ball to take the bet”. Now, the experimental setup is such that they can’t force everyone with a green ball to take the bet, so this is of no practical importance. But one might nevertheless think that there is an inconsistency.
But there actually is no inconsistency. Seeing that you picked a green ball is relevant evidence that rightly changes your belief in whether it would be good for everyone to take the bet. And in this situation, if you found some way to cheat and force everyone to take the bet (and had no moral qualms about doing so), that would in fact be the correct action, producing an expected reward of 5.6, rather than zero.
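To make the arithmetic behind that 5.6 explicit (the payoff amounts are my assumption from the standard version of the problem, where the group gains $12 if the mostly-green urn was chosen and loses $52 otherwise; only the 5.6 and zero figures appear above):

```python
# Expected group reward if everyone with a green ball takes the bet.
# Payoffs assumed from the standard version of the problem, not stated
# above: +12 if the mostly-green urn was chosen, -52 if the mostly-red.
win, loss = 12.0, -52.0

# Ex ante, before anyone draws: the coin is fair, so P(mostly green) = 1/2.
ex_ante = 0.5 * win + 0.5 * loss
print(ex_ante)  # -20.0: initially it looks bad for everyone to take the bet

# After you draw a green ball: P(mostly green) = 9/10, as computed earlier.
ex_post = 0.9 * win + 0.1 * loss
print(ex_post)  # 5.6 (up to float rounding): the reward cited above, vs. 0
```

Under these assumed payoffs, the same numbers show why the initial agreement not to bet was rational ex ante (expected reward of -20) even though, conditional on seeing a green ball, forcing everyone to bet would be correct (+5.6).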
I don’t see any inconsistency in beliefs. Initially, everyone thinks that the probability that the urn with 18 green balls is chosen is 1⁄2. After someone picks a green ball, they revise this probability to 9⁄10, which is not an inconsistency, since they have new evidence, so of course they may change their belief. This revision of belief should be totally uncontroversial. If you think a person who picks a green ball shouldn’t revise their probability in this way then you are abandoning the whole apparatus of probability theory developed over the last 250 years. The correct probability is 9⁄10. Really. It is.
I don’t like this style of argument by authority and sheer repetition.
That said, I feel totally confused about the matter, so I can’t say whether I agree or not.
Well, for starters, I’m not sure that Ape in the coat disagrees with my statement above. The disagreement may lie elsewhere, in some idea that it’s not the probability of the urn with 18 green balls being chosen that is relevant, but something else that I’m not clear on. If so, it would be helpful if Ape in the coat would confirm agreement with my statement above, so we could move on to the actual disagreement.
If Ape in the coat does disagree with my statement above, then I really do think that that is in the same category as people who think the “Twin Paradox” disproves special relativity, or that quantum mechanics can’t possibly be true because it’s too weird. And not in the sense of thinking that these well-established physical theories might break down in some extreme situation not yet tested experimentally—the probability calculation above is of a completely mundane sort entirely analogous to numerous practical applications of probability theory. Denying it is like saying that electrical engineers don’t understand how resistors work, or that civil engineers are wrong about how to calculate stresses in bridges.