Quadratic voting for the 2018 Review

LessWrong is currently reviewing the posts from 2018, and I’m trying to figure out how voting should happen. The new hotness that all your friends are talking about is quadratic voting, and after thinking about it for a few hours, it seems like a pretty good solution to me.

I’m writing this post primarily for people who know more about this stuff to show me where the plan will fail terribly for LW, to suggest UI improvements, or to suggest an alternative plan. If nothing serious is raised that changes my mind in the next 7 days, we’ll build a straightforward UI and do it.

I’ve not read anything about it, so briefly, what is quadratic voting?

I’m just picking it up myself, so I’ll write a short explanation of it as I understand it, and will update this if it’s confused/mistaken.

As I understand it, the key insight behind quadratic voting is that everyone can cast multiple votes, but that the marginal cost of votes is increasing, rather than staying constant.

With other voting mechanisms where the cost per vote stays constant, people are always incentivised to pile all their votes onto their favourite option. This is similar to how, under a naive scoring rule, people bet all their chips on the outcome they think is most likely, rather than spreading their money in proportion to their actual probability mass. You have to think carefully to design a proper scoring rule that incentivises users to write down their full epistemic state.
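
As a quick illustration of that analogy (a toy demo of my own, not part of the voting scheme): under a linear reward you maximise expected score by going all-in on your favourite outcome, while under the logarithmic scoring rule you maximise it by reporting your actual probabilities.

```python
import math

p = [0.6, 0.3, 0.1]        # your true probabilities over three outcomes
honest = p
all_in = [1.0, 0.0, 0.0]   # bet everything on the favourite outcome

def expected_linear(q):
    # Linear reward: you're paid q[i] if outcome i happens.
    return sum(pi * qi for pi, qi in zip(p, q))

def expected_log(q):
    # Log scoring rule: you're paid log(q[i]) if outcome i happens.
    return sum(pi * math.log(max(qi, 1e-12)) for pi, qi in zip(p, q))

print(expected_linear(all_in) > expected_linear(honest))  # True: linear rewards going all-in
print(expected_log(honest) > expected_log(all_in))        # True: log score rewards honesty
```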

With quadratic voting, instead of spending all your votes on your favourite option, the more you spend on an option the more it costs to spend further on it, and other options start looking more worthwhile. Your first vote costs 1, your second vote costs 2, your nth vote costs n. And since you have a limited budget of cost to spend, you start having to make new comparisons about where the marginal vote is best spent.
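
Here is a minimal sketch of that cost schedule (the function names are mine, not anything LessWrong has built): the total cost of k votes on one post is the kth triangular number.

```python
def marginal_cost(n: int) -> int:
    """Cost of your nth vote on a single post: 1, 2, 3, ..."""
    return n

def total_cost(k: int) -> int:
    """Total cost of casting k votes on one post: 1 + 2 + ... + k = k(k+1)/2."""
    return k * (k + 1) // 2

print(total_cost(1))  # 1
print(total_cost(5))  # 15
print(total_cost(8))  # 36 -- each additional vote costs more than the last
```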

Concretely, there are two things for a person voting in this system to keep track of:

  • Votes is the total number of votes you cast.

  • Cost is the total cost you pay for those votes.

If your votes are

  • Post A: 5 votes

  • Post B: 8 votes

  • Post C: 1 vote

Then your numbers are:

  • Votes is 5 + 8 + 1 = 14

  • Cost is (1 + 2 + 3 + 4 + 5) + (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8) + (1) = 15 + 36 + 1 = 52
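
Redoing that bookkeeping in code (same toy cost function as in the sketch above):

```python
def total_cost(k: int) -> int:
    # 1 + 2 + ... + k = k(k+1)/2
    return k * (k + 1) // 2

votes = {"Post A": 5, "Post B": 8, "Post C": 1}

print(sum(votes.values()))                        # Votes: 5 + 8 + 1 = 14
print(sum(total_cost(k) for k in votes.values())) # Cost: 15 + 36 + 1 = 52
```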

This is a system that invites voters not only to give a rank ordering, but to give their price for marginal votes on different posts. This information makes it much easier to combine everyone’s preferences (though how much you care about everyone’s utilities is a free variable in the system: democracies weight everyone equally, and we can consider alternatives like karma-weighting on LW).

It’s called quadratic voting because the sum of all the numbers up to n is n(n+1)/2, which is roughly half of n². I think (but don’t know) that it doesn’t really matter exactly how much each vote costs, as long as the cost is increasing, because that’s what causes you to calculate your price for marginal votes on different options. “Price Voting” might have been a good name for it.

Other explanations I’ve seen frame it as the marginal vote counting for less, rather than costing more, but I’m pretty sure the two framings have an identical effect.
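
To see why, here’s a toy sketch assuming the pure-quadratic variant, where k votes cost k² credits (rather than the triangular 1 + 2 + … + k schedule above): counting c credits as √c votes is exactly the inverse function, so the two framings describe the same trade-off.

```python
import math

def votes_to_cost(k: int) -> int:
    # "Costs more" framing: k votes on a post cost k^2 credits.
    return k * k

def cost_to_votes(c: float) -> float:
    # "Counts less" framing: c credits spent on a post count as sqrt(c) votes.
    return math.sqrt(c)

# Round-tripping shows the two framings are inverses of each other:
for k in range(1, 6):
    c = votes_to_cost(k)
    assert math.isclose(cost_to_votes(c), k)
    print(f"{k} votes <-> {c} credits")
```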

Vitalik Buterin has written more about it here.

How would this work in the LessWrong 2018 Review?

So, we’re trying to figure out what were the best posts from 2018, in an effort to build common knowledge of the progress we made, and also to reward thinkers for having good ideas. We’ll turn the output into a sequence and, with a bit of editing, I’ll also make it into a physical book.

(Note: Content will only be published with the author’s explicit consent, obviously.)

To do this vote, I’d suggest the following setup:

  • All users over a certain karma threshold (probably 1000 karma, which is ~500 users) are given an input page where they can vote on all posts that were nominated that year.

  • The total Cost each user can spend is set to 500 (see the sketch after this list for how the budget check might work).

  • Voting will be open for 2 weeks, during which time users can cast their votes. You can save your votes and come back to edit them at any time during the two weeks.

  • While the votes are being cast, there will be a ‘snapshot’ published 1 week in, showing what the final result would be if the vote were taken that day. This will help users understand what output the votes are connected to, and help you figure out which posts you want to write reviews for (i.e. writing things to help other users understand why a post is being undervalued/overvalued according to you).

  • At the end, I’ll publish a few different aggregations of the votes, such as karma-weighted, not-karma-weighted, and maybe some other ways of combining rank orderings. We’ll do something like select the top N posts whose word count sums to ~100k words (aka a 350 page book) to be in the final sequence. I expect this will be around 25-30 posts.
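
To make the mechanics concrete, here’s a hedged sketch of how the budget check, the tally, and the final selection might work. All the names here are hypothetical and this isn’t LessWrong’s actual code; the raw-karma weighting and the greedy word-budget selection are just one possible reading of the bullets above.

```python
BUDGET = 500

def cost_of(ballot: dict[str, int]) -> int:
    """Total cost of a ballot; k votes on one post cost k(k+1)/2,
    and negative votes cost by magnitude (-4 costs the same as +4)."""
    return sum(abs(k) * (abs(k) + 1) // 2 for k in ballot.values())

def is_valid(ballot: dict[str, int]) -> bool:
    return cost_of(ballot) <= BUDGET

def tally(ballots: dict[str, dict[str, int]],
          karma: dict[str, int] | None = None) -> dict[str, float]:
    """Sum votes per post, optionally weighting each user's votes by karma."""
    totals: dict[str, float] = {}
    for user, ballot in ballots.items():
        weight = karma[user] if karma is not None else 1
        for post, k in ballot.items():
            totals[post] = totals.get(post, 0) + weight * k
    return totals

def select_top(totals: dict[str, float], word_counts: dict[str, int],
               word_budget: int = 100_000) -> list[str]:
    """One reading of the final step: greedily take the highest-scoring
    posts until the ~100k-word budget for the book is filled."""
    chosen, used = [], 0
    for post in sorted(totals, key=totals.get, reverse=True):
        if used + word_counts[post] <= word_budget:
            chosen.append(post)
            used += word_counts[post]
    return chosen

ballots = {"alice": {"Post A": 5, "Post B": -2}, "bob": {"Post A": 1, "Post C": 7}}
assert all(is_valid(b) for b in ballots.values())
print(tally(ballots))  # {'Post A': 6, 'Post B': -2, 'Post C': 7}
```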

Some objections

What if I have reason to think a post is terrible?

Negative voting is also allowed, where a negative vote costs the same as a positive vote of the same magnitude, i.e. −4 votes cost the same as 4 votes. I think this probably deals with such cases fine.

What about AI alignment writing, or other writing that I feel I cannot evaluate?

So let me first deal with the most obvious case here, which is AI alignment writing. I think these posts are important research from the perspective of the long-term future of civilization, but they also aren’t of direct interest to most users of LessWrong, and, more importantly, many users can’t personally tell which posts are good or bad.

For one, I think that alignment ideas are very important and I plan to personally work on alignment in a focused way, and so I’m not too worried about it not all getting recognised in this review process. I’d ideally like to make books of the Embedded Agency, Iterated Amplification, and Value Learning sequences, for example, so I’m not too bothered if the best ideas from each of those aren’t in the LessWrong 2018 book.

For two, I think there is a deep connection between AI alignment and rationality, and that much of the key material for thinking about both (Bayes, information theory, embedded agency, etc) has been very useful to me both in thinking about AI alignment and in personal decision making. I think that some of the best alignment content consists of deep insights that many rationalists will find useful, so I do think some of it will pass this bar.

For three, I trust users to have a good sense of what content is valuable. I think I have a sense of which alignment content is useful in part because I trust other users in a bunch of ways (Paul Christiano, Wei Dai, Abram Demski, etc). There’s no rule saying you can’t update on the ideas and judgement of people you trust.

Overall I trust users and their judgments quite a bit, and if users feel they can’t tell if a post is good, then I think I want to trust that judgment.

Is this too much cognitive overhead for LW users?

I am open to ideas for more efficient voting systems.

A key benefit of spreading it over two weeks is that users don’t have to do it in a single sitting. Personally, I feel like I can cast my quadratic votes on 75 posts in about 20 minutes, and then will want to come back to it a few days later to see if it still feels right. However, Ray found it took him more like 2 hours of heavy work, and he feels the system will be too complicated for most people to get a handle on.

I think the minimum effort is fairly low, so I expect most users to have a fine time, but I’m happy to receive public and private feedback about this. In general I’ll likely run a brief survey afterwards about how the whole process went.