Raw numbers to go with Bendini’s comment:
As of the time of writing this comment, there’ve been 82 reviews on the 75 qualified (i.e., twice-nominated) posts by 32 different reviewers. 24 reviews were by 18 different authors on their own posts.
Whether this counts as a shortage, is puzzling, or is concerning is a harder question to answer.
My quick thoughts:
Personally, I was significantly surprised by the level of contribution to the 2018 Review. It’s really hard to get people to do things (especially things that are New and Work) and I wouldn’t have been puzzled at all if the actual numbers had been 20% of what they actually are. Even the more optimistic LW team members had planned for a world where the team hunkered down and wrote all the reviews ourselves.
If we consider the relevant population of potential reviewers to be the same as those eligible to vote, i.e., users with 1000+ karma, then there are ~130 [1] such users who view at least one post on the site each week (~150 at the monthly timescale). That gives us 20-25% of active eligible voters writing reviews.
If you look at all users above 100 karma, the number is 8-10% of candidate reviewers engaging in the Review. People below 100 karma won’t have written many comments and/or probably haven’t been around for that long, so they aren’t likely candidates.
Relative to the people who could reasonably be expected to review, I think we’re doing decently if something like 10-20% of the people who could do something are doing it. Of course, there’s another question of why there aren’t more people with 100+ or 1000+ karma around to begin with, but that’s probably not due to the incentives or mechanics of the review.
[1] For reference, there are 430 users in the LessWrong database with more than 1000 karma.
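For concreteness, here’s the arithmetic behind those percentages as a small Python sketch (every input is a count quoted in this comment and its footnote, so this is just division, not new data):

```python
# Back-of-the-envelope check of the participation percentages quoted above.
# All numbers are taken from the comment (and its footnote); nothing is re-measured.
reviewers = 32               # distinct users who have written at least one review
weekly_active_voters = 130   # ~users with 1000+ karma viewing a post each week
monthly_active_voters = 150  # same, at the monthly timescale
all_1000_plus_karma = 430    # every user with 1000+ karma, active or not (footnote [1])

print(f"vs weekly active eligible voters:  {reviewers / weekly_active_voters:.0%}")   # ~25%
print(f"vs monthly active eligible voters: {reviewers / monthly_active_voters:.0%}")  # ~21%
print(f"vs all 1000+ karma users:          {reviewers / all_1000_plus_karma:.0%}")    # ~7%
```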
Those numbers look pretty good in percentage terms. I hadn’t thought about it from that angle and I’m surprised they’re that high.
FWIW, my original perception that there was a shortage was based on the ratio between the quantity of reviews and the quantity of new posts written since the start of the review period. In theory, the latter takes a lot more effort than the former, so it would be unexpected if more people do the higher-effort thing unprompted while fewer people do the lower-effort thing despite explicit calls to action and $2000 in prize money.
Re: the ratio
The ratio isn’t obviously bad to me, depending on your expectation. Between the beginning of the review on Dec 8th and Jan 3rd [1] there have been 199 posts (excluding question posts but not link posts), but of those:
- 149 posts written by 66 users with over 100 karma
- 95 written by 33 users above 1000 karma (the most relevant comparison)
- 151 posts written by 75 people whose account was first active before 2019.
Comparing those with the 82 reviews by 32 reviewers, that’s a reviews:posts ratio of somewhere between 1:1 and 1:2.
I’m curious if you’d been expecting something much different. [ETA: because of the incomplete data you might want to say 120 posts vs 82 reviews, which is roughly 1:1.5.]
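To make the ratio explicit, here is the same comparison as a quick Python sketch (post and review counts are the ones listed above for the Dec 8th to Jan 3rd window):

```python
# Reviews-to-posts ratio for each candidate comparison population,
# using the Dec 8th - Jan 3rd post counts quoted above and the 82 reviews to date.
reviews = 82
posts_by_population = {
    "all posts":                   199,
    "authors with 100+ karma":     149,
    "authors with 1000+ karma":     95,
    "accounts active before 2019": 151,
}

for label, posts in posts_by_population.items():
    print(f"{label}: 1 review per {posts / reviews:.2f} posts")
# The 100+/1000+ karma subsets land between roughly 1:1.2 and 1:1.8,
# i.e. the "between 1:1 and 1:2" range mentioned above.
```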
Re: the effort
It’s not clear to me that the effort involved means you should expect more reviews:
1) I think the cost-benefit ratio for posts is higher even if they take longer.
2) Reviewing a post only happens if you’ve read the post and it impacted you enough to remember it and feel motivated to say stuff about it.
3) When I write posts, it’s about something I’ve been thinking about and am excited about; I haven’t developed any habit of being excited about reviews since I’m not used to it.
[1] That’s when I last pulled that particular data onto my machine, and I’m being a bit lazy because 8 more days isn’t going to change the overall picture, though it does mean the relative numbers are a bit worse for reviews.
Okay, so 80% of the reviewers have >1000 karma, and 90% have >=463 karma, which means I think the “20-25% of active eligible voters are writing reviews” number is correct, if this methodology actually makes sense.
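A minimal sketch of that cross-check (the 80% share and the 32-reviewer count come from the comments above, as do the 130-150 active-voter counts; this is just arithmetic on those quoted figures):

```python
# If ~80% of the 32 reviewers clear the 1000-karma bar, roughly how many
# reviewers are eligible voters, and what share of active eligible voters is that?
reviewers = 32
share_with_1000_plus_karma = 0.80            # from the comment above

eligible_reviewers = reviewers * share_with_1000_plus_karma   # ~26 people
for active_voters in (130, 150):             # weekly / monthly active 1000+ karma users
    print(f"~{eligible_reviewers / active_voters:.0%} of {active_voters} active eligible voters")
# Roughly 17-20%, i.e. in the same ballpark as the quoted 20-25% figure.
```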