Like, I feel like, with the same type of argument made in the post, I could write a post saying “there are no voting impossibility theorems”, argue that the assumptions of Arrow’s impossibility theorem are not universally satisfied, and then accuse everyone who ever talked about voting impossibility theorems of making “an error”, since “those things are not real theorems”. And I think everyone working on voting-adjacent impossibility theorems would be pretty justifiably annoyed by this.
I think that there is some sense in which the character in your example would be right, since:
Arrow’s theorem doesn’t bind approval voting, since the theorem assumes ranked (ordinal) ballots.
Generalizations of Arrow’s theorem don’t bind probabilistic methods, e.g., rules where each candidate is chosen with probability proportional to the number of votes they get.
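For concreteness, here is a minimal sketch of the probabilistic rule described above; the function name and interface are my own invention for illustration, not anything from the literature:

```python
import random

# A probabilistic voting rule: each candidate wins with probability
# proportional to their vote count ("random ballot"-style).
def probabilistic_winner(votes, rng=random):
    """votes: dict mapping candidate -> non-negative vote count."""
    candidates = list(votes)
    weights = [votes[c] for c in candidates]
    # random.choices draws with probability proportional to the weights.
    return rng.choices(candidates, weights=weights, k=1)[0]

# With votes A: 60, B: 40, candidate A wins with probability 0.6.
winner = probabilistic_winner({"A": 60, "B": 40})
```

Because the rule’s output is a lottery rather than a deterministic ranking, it simply falls outside the assumptions of the classical theorems.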
Like, if you had someone saying there was “a deep core of electoral process” which implies that, as elections scale to important decisions, you will necessarily get “highly defective electoral processes”, as illustrated in the classic example of the “dangers of the first-past-the-post system”. Well, in that case it would be reasonable to wonder whether the assumptions of the theorem bind, or whether there is some system like approval voting which is much less shitty than the theorem provers were expecting, because the assumptions don’t hold.
The analogy is imperfect, though, since approval voting is a known decent system, whereas for AI systems we don’t have an example of a friendly AI.
Sorry, this might not have been obvious, but I do think the voting impossibility theorems have holes in them because of the lotteries case, and that’s specifically why I chose that example.
I think that intellectual point matters, but I also think that writing a post titled “There are no voting impossibility theorems”, defining “voting impossibility theorems” as “theorems that imply that all voting systems must make these known tradeoffs”, and then citing everyone who ever talked about “voting impossibility theorems” as having made “an error” would just be pretty unproductive. I would rather see a post like the ones Scott Garrabrant made, along the lines of “I think voting impossibility theorems don’t account for these cases”; that seems great, and I have been glad about contributions of this type.
Unfortunately, most democratic countries do use first-past-the-post.
The two things that are inevitable are Condorcet cycles and strategic voting (though Condorcet cycles are less of a problem as you scale up the population, and I have a sneaking suspicion that they go away entirely if we allow a continuum of voters indexed by the real numbers).
I think most democratic countries use proportional representation, not FPTP. But talking about “most” is itself an FPTP-style error. Enough countries use proportional representation that you can study the effect of voting systems, and the results are shocking to me: the theoretical predictions are completely wrong. Duverger’s law is false in every FPTP country except America. On the flip side, while PR does lead to more parties, they still form a one-dimensional spectrum. For example, a Green party is usually a far-left party with slightly different preferences, rather than a single-issue party that is willing to form coalitions with the right.
If politics were two-dimensional, why wouldn’t you expect Condorcet cycles? And why would a larger population get rid of them? If you have two candidates, a tie between them is on a razor’s edge: the larger the population of voters, the less likely it becomes. But if you have three candidates and three roughly equally common preference orders, the cyclic shifts of A > B > C, then this is a robust tie. You only get a Condorcet winner when one of the factions becomes as big as the other two combined. Of course, I have assumed away the other three preference orders, but the conclusion is robust to them being small, not merely nonexistent.
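The three-faction claim above is easy to check by brute force. This is my own sketch (the function and faction encoding are invented for illustration): three factions with the cyclic preference orders A > B > C, B > C > A, C > A > B, and a strict Condorcet winner appears only when one faction outweighs the other two combined.

```python
# Three factions with cyclic preference orders over candidates A, B, C.
PREFS = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def condorcet_winner(sizes):
    """Return the strict Condorcet winner among A, B, C, or None.

    sizes[i] is the number of voters holding preference order PREFS[i].
    """
    total = sum(sizes)

    def prefers(pref, x, y):
        return pref.index(x) < pref.index(y)

    for w in "ABC":
        # w wins iff a strict majority prefers w in every pairwise contest.
        if all(
            2 * sum(s for pref, s in zip(PREFS, sizes) if prefers(pref, w, o)) > total
            for o in "ABC" if o != w
        ):
            return w
    return None

print(condorcet_winner((34, 33, 33)))  # None: the three-way tie is robust
print(condorcet_winner((67, 17, 16)))  # A: one faction outweighs the rest
```

Perturbing (34, 33, 33) slightly still yields no winner, matching the “robust tie” point: unlike a two-candidate tie, this one doesn’t sit on a razor’s edge.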
I don’t know what happens in the following model: there are three issues, A, B, C. Everyone, both voters and candidates, is in favor of all of them, but in a zero-sum way, represented by a vector (a, b, c) with a + b + c = 11 and a, b, c >= 0. Start with the voters as above, at (10, 1, 0), (0, 10, 1), (1, 0, 10). Then the candidates (11, 0, 0), (0, 11, 0), (0, 0, 11) form a Condorcet cycle. By symmetry, there is no Condorcet winner over all possible candidates. Now randomly shift the proportions of the voters. Is there a candidate that beats the three given candidates? One that beats all possible candidates? I doubt it. Add noise to make the individual voters unique. Now, I don’t know.
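To check the claimed cycle concretely, here is a quick sketch. The dot-product utility is my assumption, since the model doesn’t specify how voters rank candidates; under that reading the three given candidates do cycle:

```python
from itertools import combinations

# Voters and candidates from the model: vectors (a, b, c), a + b + c = 11.
voters = [(10, 1, 0), (0, 10, 1), (1, 0, 10)]
candidates = {"A": (11, 0, 0), "B": (0, 11, 0), "C": (0, 0, 11)}

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def margin(x, y):
    """Net count of voters preferring candidate x to candidate y,
    where a voter ranks candidates by dot product with their own vector."""
    return sum(
        (dot(v, candidates[x]) > dot(v, candidates[y]))
        - (dot(v, candidates[x]) < dot(v, candidates[y]))
        for v in voters
    )

for x, y in combinations("ABC", 2):
    print(x, y, margin(x, y))  # A beats B, B beats C, C beats A: a cycle
```

The open questions in the paragraph above (after shifting voter proportions or adding noise) could be explored with the same `margin` function over a grid of candidate vectors.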
You don’t have strategic voting with probabilistic results. And the degree of strategic voting can also be mitigated.
Hm, I remember Wikipedia mentioned Hylland’s theorem, which generalizes the Gibbard-Satterthwaite theorem to the probabilistic case, though Wikipedia might be wrong on that.