I wonder whether people’s responses would change if they could verify that the unknown proportion of green/blue was chosen using “fair” randomness. When I imagine a bastard experimenter in the loop, I lean toward Nash equilibrium considerations like “choosing red is less exploitable than choosing green” and “choosing green+blue is less exploitable than choosing red+blue”.
If you know the experimenter is trying to exploit you, then the fact that they posed the question as “red or green” decreases the expected number of green balls. On the other hand, if they posed the question as “green+blue or red+blue”, that increases the expected number of green balls. So this is entirely consistent with Bayesian probability, conditioned on which question the evil experimenter asked you. This is the same reason why you shouldn’t necessarily want to bet either way on a proposition if the person offering the bet might have information about it that you don’t have.
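To make that conditioning concrete, here is a minimal toy model (my own illustrative assumptions, not anything stated in the experiment): the experimenter draws the green count uniformly, then poses whichever question makes the ambiguous bet weaker, anticipating a subject who takes the ambiguous side. Conditioning on which question you were asked then shifts the expected number of green balls in exactly the directions described above.

```python
# Toy model: 30 red balls, 60 green/blue balls with an unknown split.
# Assumption (mine, for illustration): the green count g is drawn uniformly
# from 0..60, and the experimenter then poses whichever question makes the
# ambiguous bet weaker, expecting the subject to take the ambiguous side.
green_counts = range(61)        # g green balls, 60 - g blue balls

asked_red_or_green = []         # "red or green?": ambiguous bet is green (g/90)
asked_mixed = []                # "green+blue or red+blue?": ambiguous bet is red+blue ((90-g)/90)

for g in green_counts:
    blue = 60 - g
    if g <= 30 + blue:          # the green bet is the weaker ambiguous option
        asked_red_or_green.append(g)
    else:                       # the red+blue bet is the weaker ambiguous option
        asked_mixed.append(g)

def mean(xs):
    return sum(xs) / len(xs)

print("E[green] unconditionally:            ", mean(green_counts))        # 30.0
print("E[green | asked red-or-green]:       ", mean(asked_red_or_green))  # 22.5, below 30
print("E[green | asked the mixed question]: ", mean(asked_mixed))         # 53.0, above 30
```

Of course a real experimenter needn’t behave like this; the point is only that “which question was asked” can itself carry information about the bag.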
If the experimenter knows what distribution you expect, they may decide not to use that distribution. And unless you’re the first person in the experiment, they have in fact been learning what distributions people expect, though not what you in particular expect.
What you could do is run your own similar experiment on regular subjects first, so you know what the experimenter is likely to expect, and then impersonate a regular subject when you are called into the experiment, up until the point you get offered a bet. I don’t think they would have accounted for that possibility, and even if they did, it would be rare enough to still be unexpected.
But make sure the financial incentives to do this aren’t large enough that other people do the same thing, or it will ruin your plan. You have to be satisfied with outwitting the experimenter.
And no matter how small a probability someone assigns to “the randomness is unfair ’cause the experimenter is a dick”, picking red will yield epsilon more expected money than picking green. You need a probability of exactly zero for the choices to be equivalent.
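As a quick check on the epsilon claim, here is the arithmetic under one illustrative assumption of my own: with probability p the experimenter has stacked the urn against green (say, zero green balls), and otherwise the split is fair (30 green on average).

```python
# Illustrative only: "stacked" means zero green balls; "fair" means 30 green on average.
def expected_winnings(p, prize=100):
    p_red = 30 / 90                           # exactly 1/3, regardless of the experimenter
    p_green = ((1 - p) * 30 + p * 0) / 90     # strictly below 1/3 for any p > 0
    return p_red * prize, p_green * prize

for p in (0.0, 0.01, 0.1):
    red, green = expected_winnings(p)
    print(f"p={p:.2f}  E[red bet]={red:.2f}  E[green bet]={green:.2f}")
# Only at p = 0 are the two bets exactly equivalent.
```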
Of course, in an abstract thought experiment as described, a probability of zero is indeed implied, but people don’t pay attention to instructions, as anybody doing tech support will tell you—they invent stuff that was never said, and ignore other bits (I’m guilty of that myself—we all are, I think).
If the experimenter is a dick, then both boxes contain a dagger; or, as it may be, a boot.
The obvious type of fair randomness is a symmetrical distribution (equally likely to give N more blue than green as to give N more green than blue), and this gives equal chances of blue or green coming out of the bag. If I knew it was a double-blind experiment, so that the experimenter doesn’t know the contents of the bag, I would treat red, blue and green as each having a known probability of one third. If the offers might depend on the experimenter’s knowledge of the bag contents, I would not.
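A small sketch of that symmetry claim, using two illustrative symmetric priors of my own choosing over the green count (any prior with P(g) = P(60 - g) gives the same answer):

```python
from fractions import Fraction

# Marginal chance of drawing green, given a prior over the green count g.
def marginal_green(prior):                      # prior: dict mapping g -> probability
    return sum(p * Fraction(g, 90) for g, p in prior.items())

# Two priors that are symmetric around 30 (P(g) == P(60 - g)):
uniform = {g: Fraction(1, 61) for g in range(61)}    # uniform over 0..60
lumpy = {0: Fraction(1, 2), 60: Fraction(1, 2)}      # all-green or all-blue, 50/50

print(marginal_green(uniform))   # 1/3
print(marginal_green(lumpy))     # 1/3
```

Either way the marginal comes out at the same 1/3 you’d assign to red; the ambiguity only starts to matter once the offers can depend on the actual bag contents.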
This is exactly my reaction too: when faced with any situation where another agent might influence outcomes, people naturally think more in terms of game theory and minimaxing than probabilities. (Of course, here minimaxing is applied to probabilistic gambles, but the volunteer presumes the “30 red balls” rule to be less subject to manipulation than the balance of green vs. blue.)