Quadratic Voting and Collusion
Quadratic voting is a proposal for a voting system that ensures participants cast a number of votes proportional to how much they care about an issue, by making the marginal cost of each additional vote increase linearly: n votes on an issue cost n² in total, so a budget of B buys √B votes. See this post by Vitalik for an excellent introduction.
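Concretely, here is a minimal sketch of that cost rule (the function names are my own, purely for illustration):

```python
import math

def cost_of_votes(n: int) -> float:
    """Total cost of n votes on one issue: 1 + 3 + 5 + ... + (2n - 1) = n^2."""
    return float(n ** 2)

def votes_for_budget(dollars: float) -> float:
    """Inverting the cost rule: d dollars buy sqrt(d) votes."""
    return math.sqrt(dollars)

print(cost_of_votes(10))        # 100.0: ten votes cost $100
print(votes_for_budget(100.0))  # 10.0: $100 buys ten votes
```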
One major issue with QV is collusion: since the marginal cost of a vote depends on how many votes the buyer has already cast, spreading your vote-buying across multiple people lets you buy more votes for the same amount of money. For instance, suppose you and a friend have $100 each; you care only about Cause A, they care only about Cause B, and neither of you cares about any of the other causes up for vote. You could spend all of your $100 on A and they could spend all of theirs on B, or you could both agree to each spend $50 on A and $50 on B. Under the split, each cause receives 2√50 ≈ 14.1 votes instead of √100 = 10, which is √2 times the votes for both A and B compared to the default.
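A quick numeric check of that √2 claim, using the numbers above:

```python
import math

budget = 100.0

# Default: each voter spends their full $100 on their own cause.
default = math.sqrt(budget)           # 10.0 votes per cause

# Collusion: each voter spends $50 on each cause, so every cause
# draws on two separate (and therefore cheaper) cost curves.
colluded = 2 * math.sqrt(budget / 2)  # ~14.14 votes per cause

print(colluded / default)             # ~1.4142... = sqrt(2)
```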
The solution generally proposed in response to this issue is to ensure that the vote is truly secret, to the extent that you cannot even prove to anyone else how you voted. The thinking is that this creates a prisoner's dilemma: by defecting, you obtain both the $50 from your friend and the full $100 from yourself for your own cause, and because there is no way to confirm how you voted, there is no way to reward cooperation or punish defection.
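Filling in the payoffs from the running example (each player's payoff is the number of votes their own cause receives) confirms this really is a prisoner's dilemma; a small sketch:

```python
import math

def my_votes(my_spend_on_mine: float, friend_spend_on_mine: float) -> float:
    """Votes my cause receives; each spender is on their own quadratic cost curve."""
    return math.sqrt(my_spend_on_mine) + math.sqrt(friend_spend_on_mine)

BUDGET = 100.0
# Cooperate = honor the deal ($50 on your own cause, $50 on your friend's);
# Defect = spend the full $100 on your own cause.
spend_on_own = {"C": 50.0, "D": 100.0}

for me in "CD":
    for friend in "CD":
        votes = my_votes(spend_on_own[me], BUDGET - spend_on_own[friend])
        print(f"me={me}, friend={friend}: my cause gets {votes:5.2f} votes")

# D/C (17.07) > C/C (14.14) > D/D (10.00) > C/D (7.07): defection dominates.
```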
Unfortunately, I have two objections to this solution, one theoretical and one practical. The theoretical objection is that if the two agents are able to accurately predict each other's actions and reason using FDT, then it is possible for the two agents to cooperate à la FairBot: the inability to prove how you voted after the fact is circumvented by proving ahead of time how you will vote. The practical objection is that people tend to cooperate in prisoner's dilemmas a significant amount of the time anyway, and in general a lot of people uphold promises they make even when the other party has no way to verify them. I think there's even an argument to be made that the latter is at least partially a real-world instance of the former.
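FairBot proper cooperates with you exactly when it can find a proof that you cooperate with it, which takes Löbian machinery to implement. The sketch below is a much cruder stand-in in the same spirit (a CliqueBot rather than a true FairBot), cooperating only with exact copies of itself, but it illustrates the loophole: two such agents verifiably cooperate even though neither could prove anything after the fact.

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent runs this exact program.

    A crude stand-in for FairBot, which instead searches for a proof
    that its opponent cooperates with it."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

me = inspect.getsource(clique_bot)
print(clique_bot(me))                                  # "C": copies cooperate
print(clique_bot("def defect_bot(src): return 'D'"))   # "D" against anyone else
```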
I’m still optimistic that this might not be a showstopping problem in practice, since the pool of people you trust enough to collude with is limited, which puts a ceiling on how many times your vote can count: splitting a fixed budget across k cooperating voters buys √k times the votes a single voter could. However, I think this is still a major unavoidable flaw with QV for both practical and theoretical applications.
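A one-line check of that √k scaling, under the same cost rule as above:

```python
import math

total = 100.0  # total dollars the ring spends on one cause
for k in [1, 2, 5, 10, 100]:
    votes = k * math.sqrt(total / k)  # k voters each spend total / k
    print(f"k={k:>3}: {votes:6.2f} votes ({votes / math.sqrt(total):.2f}x a lone voter)")
```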