Huh. Reading this again, together with byrnema’s pointer discussion and Psy-Kosh’s non-anthropic reformulation...
It seems like the problem is that whether each person gets to make a decision depends on the evidence they think they have, in such a way as to make that evidence meaningless. To construct an extreme example: The Antecedent Mugger gathers a billion people in a room together, and says:
“I challenge you to a game of wits! In this jar is a variable quantity of coins, worth between $0 and $10,000. I will allow each of you to weigh the jar using this set of extremely imprecise scales. Then I will ask each of you whether to accept my offer: as a group, you buy the jar off me for $5000, the money to be distributed equally among you. Note: although I will ask all of you, the only response I will consider is the one given by the person with the greatest subjective expected utility from saying ‘yes’.”
In this case, even if the jar always contains $0, there will always be someone who receives enough information from the scales to think the jar contains >$5000 with high probability, and therefore to say yes. Since that person’s response is the one that is taken for the whole group, the group always pays out $5000, resulting in a money pump in favour of the Mugger.
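A quick simulation makes the money pump concrete. This is my own sketch with made-up numbers (normally distributed scale error with a $3,000 standard deviation, and a jar that always contains $0): with enough observers, the largest weight estimate essentially always exceeds $5,000, so the group always buys.

```python
import random

random.seed(0)

N_OBSERVERS = 100_000   # stand-in for the billion people
NOISE_SD = 3_000        # "extremely imprecise" scales (assumed error model)
TRUE_VALUE = 0          # the jar always contains $0

# Each person weighs the jar once with the noisy scales.
estimates = [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(N_OBSERVERS)]

# The Mugger listens only to the most optimistic observer.
best = max(estimates)
group_buys = best > 5_000

print(f"largest estimate: ${best:,.0f}")
print(f"group pays $5000: {group_buys}")
```

Even though every single reading is an unbiased estimate of $0, selecting the maximum over so many readings guarantees someone sees a value far above the asking price.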
The problem is that, from an outside perspective, the observations of the one who gets to make the choice are almost completely uncorrelated with the actual contents of the jar, due to the Mugger’s selection process. For any general strategy Observations → Response, the Mugger can always summon enough people to find someone whose observations will produce the response he wants, unless the strategy is a constant function.

Similarly, in the problem with the marbles, only the people with the observation Green get any influence, so the observations of “people who get to make a decision” are uncorrelated with the actual contents of the buckets (even though the observations of the participants in general are correlated with the buckets).

The problem here is that your billion people are for some reason giving the answer most likely to be correct rather than the answer most likely to actually be profitable. If they were a little more savvy, they could reason as follows:
“The scales tell me that there’s $6000 worth of coins in the jar, so it seems like a good idea to buy the jar. However, if I did not receive the largest weight estimate from the scales, my decision is irrelevant; and if I did receive the largest weight estimate, then conditioned on that it seems overwhelmingly likely that there are many fewer coins in the jar than I’d think based on that estimate—and in that case, I ought to say no.”
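This savvy reasoning can be checked numerically. Here is a sketch simulation (all parameters are my own assumptions: 100 observers, normal scale error with a $3,000 standard deviation, jar value uniform on $0–$10,000): among people whose scales read about $6,000, the jar really is worth roughly that much on average; but among those who additionally hold the largest estimate, the jar is worth far less.

```python
import random

random.seed(1)

N_TRIALS, N_PEOPLE, NOISE_SD = 20_000, 100, 3_000
WINDOW = (5_500, 6_500)  # "my scales read about $6000"

naive, selected = [], []
for _ in range(N_TRIALS):
    true_value = random.uniform(0, 10_000)
    estimates = [true_value + random.gauss(0, NOISE_SD) for _ in range(N_PEOPLE)]
    top = max(estimates)
    for e in estimates:
        if WINDOW[0] <= e <= WINDOW[1]:
            naive.append(true_value)          # everyone who saw ~$6000
            if e == top:
                selected.append(true_value)   # ...and also holds the largest estimate

print(f"E[value | estimate ~$6000]            = ${sum(naive)/len(naive):,.0f}")
print(f"E[value | estimate ~$6000 and is max] = ${sum(selected)/len(selected):,.0f}")
```

Conditioning on holding the largest estimate drags the expected jar value far below the reading, which is exactly why the savvy observer says no.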
Ooh, and we can apply similar reasoning to the marble problem if we change it in a seemingly isomorphic way: instead of making the trade based on all the responses of the people who saw a green marble, Psy-Kosh selects one of the green-marble-observers at random and considers only that person’s response. (This should make no difference to the outcomes, assuming the green-marblers can’t give different responses, by no-spontaneous-symmetry-breaking and all that.)
Then, conditioning on drawing a green marble, person A infers a 9⁄10 probability that the bucket contained 18 green and 2 red marbles. However, if the bucket contains 18 green marbles, person A has a 1⁄18 chance of being randomly selected given that she drew a green marble, whereas if the bucket contains 2 green marbles, she has a 1⁄2 chance of being selected. So, conditioning on her response being the one that matters as well as on the green marble itself, she infers a (9:1) * (1/18)/(1/2) = (9:9) = (1:1) odds ratio, that is, probability 1⁄2 that the bucket contains 18 green marbles.
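The arithmetic of that update can be checked directly (a sketch, with the odds written as exact fractions):

```python
from fractions import Fraction

# Prior odds after seeing a green marble: 9:1 in favour of the 18-green bucket.
prior_odds = Fraction(9, 1)

# Chance of being the randomly selected green-marbler, under each bucket:
p_selected_if_18_green = Fraction(1, 18)
p_selected_if_2_green = Fraction(1, 2)

# Bayes: multiply the prior odds by the likelihood ratio of being selected.
posterior_odds = prior_odds * (p_selected_if_18_green / p_selected_if_2_green)
posterior_prob = posterior_odds / (1 + posterior_odds)

print(posterior_odds)   # 1 -> even odds
print(posterior_prob)   # 1/2
```

The 9:1 evidence from the marble is exactly cancelled by the 1:9 evidence from having been selected, landing on even odds.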
Which leaves us back at a kind of anthropic updating, except that this time it resolves the problem instead of introducing it!