The error: the expected donation for an individual agent deciding to precommit to “nay” is not 700 dollars. It’s pr(selected as decider) * 700 dollars, which is 350 dollars.
Why is this the case? Right here:
Next, I will tell everyone their status, without telling the status of others … Each decider will be asked to say “yea” or “nay”.
In all the worlds where you get told you are not a decider (50% of them—equal probability of a 9:1 chance or a 1:9 chance) your precommitment is irrelevant. Therefore, the case where everyone precommits to yea is logically equivalent to the case where everyone precommits to nay and then changes to yea upon being told they are a decider.
So we can talk about two kinds of precommitments: “I precommit to answering yea/nay” and “I precommit to answering yea/nay given that I am informed I am a decider”. The expected donation under the first precommitment is yea: 455 dollars and nay: 350 dollars; the expected donation under the second precommitment is yea: 910 dollars and nay: 700 dollars.¹
Yes, the expected donation under the first precommitment is half that under the second: the first goes through a 50% filter, while the second starts on the other side of that filter. Of course, if you evaluate yea before the filter and nay after it, you’re going to get the wrong result.
¹: Summing the expected values of yea: pr(heads) * pr(selected as decider) * value of ‘yea’ for heads = 0.5 * 0.1 * 100 = 5 dollars; pr(tails) * pr(selected as decider) * value of ‘yea’ for tails = 0.5 * 0.9 * 1000 = 450 dollars. Sum = 455 dollars.
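The footnote’s sums, together with the corresponding numbers for the other three cases, can be sketched in a few lines of Python. The payoff values (100 for yea on heads, 1000 for yea on tails, 700 for nay) are taken from the text; the variable names are mine:

```python
# Reproducing the footnote's arithmetic for both kinds of precommitment.
P_HEADS = P_TAILS = 0.5
P_DECIDER_GIVEN_HEADS = 0.1   # heads: 1 decider out of 10
P_DECIDER_GIVEN_TAILS = 0.9   # tails: 9 deciders out of 10

# First kind of precommitment, evaluated before the 50% filter:
ev_yea_before = (P_HEADS * P_DECIDER_GIVEN_HEADS * 100
                 + P_TAILS * P_DECIDER_GIVEN_TAILS * 1000)
ev_nay_before = 0.5 * 700     # pr(selected as decider) * 700

# Second kind, conditional on being told you are a decider:
# P(heads | decider) = 0.1 and P(tails | decider) = 0.9.
ev_yea_after = 0.1 * 100 + 0.9 * 1000
ev_nay_after = 700

print(round(ev_yea_before), round(ev_nay_before))   # 455 350
print(round(ev_yea_after), round(ev_nay_after))     # 910 700
```

Note that each "before the filter" number is exactly half its "after the filter" counterpart, which is the 50% filter at work.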
In all the worlds where you get told you are not a decider (50% of them—equal probability of 9:1 chance or a 1:9 chance) your precommitment is irrelevant.
How can that be, when other people don’t know whether or not you’re a decider?
Imagine the ten of them sitting in a room, and two people stand up and say “If I am selected as a decider, I will respond with ‘yea’.” This now forces everyone else to always vote ‘yea’, since in only 5% of all outcomes (and thus 10% of the outcomes they directly control) does voting ‘nay’ increase the total donation (by 600 * 0.1 = 60 in expectation), whereas in the other 45% / 90% of cases it decreases the donation (by 1000 * 0.9 = 900 in expectation).
The two people who stood up should then suddenly realize that the expected donation is now $550 instead of $700, and they have made everyone worse off by their declaration.
(One person making the declaration also lowers the expected donation, though by a smaller amount; the mechanism is clearer with two people.)
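These numbers can be checked by brute-force enumeration. The sketch below assumes the payout rules implied above: a unanimous ‘yea’ among deciders pays $100 on heads or $1000 on tails, a unanimous ‘nay’ pays $700, and a mixed vote pays nothing; the function names are mine:

```python
# Exhaustive expected-donation calculation over the coin flip and the
# uniformly chosen "odd one out" (the sole decider on heads, the sole
# non-decider on tails).
def donation(coin, deciders, votes):
    answers = {votes[i] for i in deciders}
    if answers == {"yea"}:
        return 100 if coin == "heads" else 1000
    if answers == {"nay"}:
        return 700
    return 0  # mixed vote pays nothing

def expected_donation(votes):
    total = 0.0
    for coin in ("heads", "tails"):
        for person in range(10):          # each equally likely, pr = 0.1
            if coin == "heads":
                deciders = [person]                               # one decider
            else:
                deciders = [i for i in range(10) if i != person]  # nine deciders
            total += 0.5 * 0.1 * donation(coin, deciders, votes)
    return total

print(round(expected_donation(["yea"] * 10)))  # 550
print(round(expected_donation(["nay"] * 10)))  # 700

# One hold-out voting "nay" against nine "yea"s loses exactly what the
# marginal analysis says: 0.45 * 1000 lost minus 0.05 * 600 gained.
mixed = ["nay"] + ["yea"] * 9
print(round(550 - expected_donation(mixed)))   # 420
```

This reproduces both the $550-vs-$700 comparison and the 5%/45% marginal argument for why the remaining eight are forced to ‘yea’.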
I think the basic error with the “vote Y” approach is that it throws away half of the outcome space. If you make trustworthy precommitments the other people are aware of, it should be clear that once two people have committed to Y the best move for everyone else is to vote Y. Likewise, once two people have committed to N the best move for everyone else is to vote N.
But, since the idea of updating on evidence is so seductive, let’s take it another step. We see that before you know whether or not you’re a decider, E(N) > E(Y). Once you know you’re a decider, you naively calculate E(Y) > E(N). But now you can ask another question: what are P(N->Y|1) and P(N->Y|9)? That is, the probability that you change your answer from N to Y given that you are the only decider, and given that you are one of the nine deciders.
It should be clear there is no asymmetry there: both P(N->Y|1) and P(N->Y|9) equal 1. But without an asymmetry, we have obtained no actionable information. This test’s false positive and false negative rates are aligned exactly so as to do nothing for us. Even though it looks like we’re preferentially changing our answer in the favorable circumstance, it’s clear from the probabilities that there’s no preference, and we’re behaving exactly as if we had precommitted to vote Y, which we know has a lower EV.
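One way to see the no-asymmetry point concretely is to parameterize a decider’s policy by the probability of switching from N to Y in each case, and note that (1, 1) collapses to “always yea”. A minimal sketch, with payoff values from the text (the independence assumption is mine, and is irrelevant in the two deterministic cases shown):

```python
# A decider switches from N to Y with probability p1 if the sole decider
# (heads) and p9 if one of nine (tails). Non-deciders' answers never
# matter, so this pins down the whole outcome.
def ev(p1, p9):
    # Heads: unanimous yea iff the lone decider switches.
    heads = p1 * 100 + (1 - p1) * 700
    # Tails: all nine switch (yea, $1000) or none switch (nay, $700);
    # anything in between pays $0. Switches assumed independent.
    tails = (p9 ** 9) * 1000 + ((1 - p9) ** 9) * 700
    return 0.5 * heads + 0.5 * tails

print(ev(1, 1))  # 550.0 -- identical to everyone precommitting to yea
print(ev(0, 0))  # 700.0 -- identical to everyone precommitting to nay
```

Because P(N->Y|1) = P(N->Y|9) = 1, the “update and switch” policy is the (1, 1) point, which is indistinguishable from precommitting to Y.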
Precommitting to “Yea” is the correct decision.
It doesn’t make much sense to say “I precommit to answering ‘nay’ iff I am not selected as a decider.”
But then … hmm, yeah. Maybe I have this the wrong way around. Give me half an hour or so to work on it again.
edit: So far I can only reproduce the conundrum. Damn.