It was especially cool in that it claimed that even altruistic CDTers can’t account for the rationality of voting in sufficiently large elections.
That’s pretty surprising. I checked out the page, and he unfortunately doesn’t spell out what kind of model he’s using, so it’s hard to verify. From the book:
“If the importance of the election is presumed proportionate to the size of the electorate, then for large enough elections, expected-utility calculations cannot justify the effort of voting by appeal to the small but heavily weighted possibility that your vote will be a tiebreaker. The odds of that outcome decrease faster than linearly with the number of voters, so the expected value of your vote as a tiebreaker approaches zero—even taking account of the value to everyone combined, not just yourself. Given enough voters, then, the causal value (even to everyone) of your vote is overshadowed by the inconvenience to you of going out to vote.”
In an election with two choices, in a model where everybody has a 50% chance of voting for either side, I don’t think the claim is true. Maybe he’s assuming that the outcomes of elections become easier to predict as they grow larger, because individual variability matters less? If everyone has a 51% probability of voting for a certain side, the outcome is pretty much guaranteed for an arbitrarily large population, in which case a CDTer wouldn’t have any reason to vote (even if a coalition of CDTers could swing the election). I’m not sure it’s true that elections in larger countries are more predictable, though.
I also think that in that case the odds of a tie don’t decrease faster than linearly, but you need to take both symmetry arguments and precision arguments into account. That is:
Suppose there are 2N other voters and everyone else votes by flipping a fair coin. Then the number of votes A for side A is Binomial(2N, 0.5) with mean N, the votes for side B are 2N − A, and the net margin A − B is 2A − 2N, with an expected value of 0.
But how likely is the margin to be 0 exactly (i.e. a tie that you flip to a win)? That’s the probability that A is exactly N, which is a decreasing function of N: roughly 1/√(πN). For N = 1,000 (i.e. 2,000 voters) it’s about 1.8%; for N = 1,000,000 it’s about 0.056%. But 1.8% divided by a thousand is much less than 0.056%, so the tie probability shrinks slower than linearly.
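Here’s a minimal sketch of that calculation (assuming scipy is available), comparing the exact binomial pmf against the 1/√(πN) approximation:

```python
# Tie probability when each of 2N other voters flips a fair coin:
# a tie means side A gets exactly N of the 2N votes.
from math import pi, sqrt

from scipy.stats import binom

for N in (1_000, 1_000_000):
    exact = binom.pmf(N, 2 * N, 0.5)  # P(A = N) under Binomial(2N, 0.5)
    approx = 1 / sqrt(pi * N)         # Stirling approximation to the same pmf
    print(f"N = {N:>9,}: P(tie) = {exact:.5%} (approx {approx:.5%})")
```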
But from the perspective of everyone in the election, it’s not clear why you in particular get to count as having ‘chosen last.’ Presumably everyone on the side with one extra vote would think “aha, it would have been a tied election if I hadn’t voted,” and splitting that credit among them gives us back our linear factor.
As well, this hinged on the probability being exactly 0.5. If instead each voter favors A with probability 50.1%, the odds of a tie are basically unchanged for the 2,000-voter election (we’ve only shifted the expected number of A voters by 2), but drop to about 1e-5 for the 2M-voter election, a drop by a factor of more than a thousand relative to the 2,000-voter case. (The expected number of A votes is now 2,000 above the N needed for a tie, which is a much higher barrier to overcome by chance.)
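Re-running the same sketch with the biased coin shows the effect:

```python
# Tie probability when each of 2N other voters favors A with probability 0.501.
from scipy.stats import binom

for N in (1_000, 1_000_000):
    # For N = 1,000,000 a tie requires A to fall ~2,000 votes below its mean.
    p_tie = binom.pmf(N, 2 * N, 0.501)
    print(f"N = {N:>9,}: P(tie) = {p_tie:.3e}")
```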
However, symmetry doesn’t help us here. Suppose you have a distribution over the ‘bias’ of the coin the other voters are flipping; a tie is just as unlikely if A is favored as if B is favored, and the more spread out our distribution over the bias is, the worse the odds of a tie are, because for large elections only biases very close to p=0.5 contribute any meaningful chance of a tie.
Consider a 2-option election with 2N voters, each of whom has probability p of choosing the first option. If p is a fixed number, then as N goes to infinity, (chance of an exact tie times N) goes to 0 if p isn’t exactly 0.5, and to infinity if it is. Since the event that p is exactly 0.5 has measure 0, this model supports the paradox of voting (PoV).
But! If p itself is drawn from an ordinary continuous distribution with nonzero probability density d around 0.5, then (chance of an exact tie times N) goes to… I think it’s just d/2. (Heuristically: P(tie | p) ≈ exp(−N(2p − 1)²)/√(πN), and integrating that bump against a density of roughly d near 0.5 gives about d/(2N).) Maybe there’s some correction factor that comes into play for bizarre distributions of p, but if we make the conventional assumption that p is beta-distributed, then d/2 is the answer.
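As a numeric sanity check (a sketch, assuming scipy, with a hypothetical Beta(20, 20) prior over p), N times the tie probability does settle near d/2:

```python
# Check that N * P(tie) approaches d / 2 when p is uncertain, where d is
# the prior density of p at 0.5. Hypothetical prior: Beta(20, 20), d ≈ 5.0.
from scipy.integrate import quad
from scipy.stats import beta, binom

a = 20.0
d = beta.pdf(0.5, a, a)  # prior density at p = 0.5

for N in (1_000, 10_000, 100_000):
    # P(tie) = integral over p of P(Binomial(2N, p) = N) * prior(p).
    # The integrand is sharply peaked at p = 0.5, so tell quad where to look.
    p_tie, _ = quad(lambda p: binom.pmf(N, 2 * N, p) * beta.pdf(p, a, a),
                    0.0, 1.0, points=[0.5], limit=200)
    print(f"N = {N:>7,}: N * P(tie) = {N * p_tie:.4f}   (d / 2 = {d / 2:.4f})")
```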
I think that the PoV literature is relying on the “fixed p” model. I think the “uncertain p” model is more realistic, but it’s still worth engaging with “fixed p” and seeing the implications of those assumptions.
As an aside, for really large populations it would probably be socially optimal for only a small fraction of the population to vote (at least if we ignore things like legitimacy, feelings of participation, etc.). As long as that fraction is randomly sampled, you get good statistical guarantees that the outcome of the election would be the same as if everyone voted. South Korea ran a pretty cool experiment along these lines: they exposed a representative sample of 500 people to pro- and anti-nuclear experts, and then let them decide how much nuclear power the country should have.
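To give a flavor of that guarantee (a sketch with a made-up 52/48 population split, approximating sampling without replacement as binomial), the chance that a random sample picks the ‘wrong’ winner shrinks rapidly with sample size:

```python
# Probability that a uniformly random sample of n voters flips the winner,
# when 52% of the full population favors the majority side.
from scipy.stats import binom

p = 0.52  # hypothetical true support for the majority side
for n in (1_001, 10_001, 100_001):  # odd sample sizes rule out sample ties
    wrong = binom.cdf(n // 2, n, p)  # majority side gets at most n // 2 votes
    print(f"sample size {n:>7,}: P(wrong winner) = {wrong:.2e}")
```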
I don’t think this is why CDTers refuse to vote, though.