Prizes for the 2021 Review
If you received a prize, please fill out your payment contact email and PayPal.
A’ight, one final 2021 Review Roundup post – awarding prizes. I had a week to look over the results. The primary way I ranked posts was by a weighted score, which gave 1000+ karma users 3x the voting weight. Here was the distribution of votes:
I basically see two strong outlier posts at the top of the ranking, followed by a cluster of 6-7 posts, followed by a smooth tail of posts that were pretty good without any clear cutoff.
Post Prizes
Gold Prize Posts
Two posts stood out noticeably above all the others; I’m awarding $800 to each.
Strong Evidence is Common by Mark Xu
“PR” is corrosive; “reputation” is not, by Anna Salamon.
I also particularly liked Akash’s review.
Silver Prize Posts
And the second (eyeballed) cluster of posts, each getting $600, is:
Your Cheerful Price, by Eliezer Yudkowsky.
This notably had the most reviews – a lot of people wanted to weigh in and say “this personally helped me”, often with some notes or nuance.
ARC’s first technical report: Eliciting Latent Knowledge by Paul Christiano, Ajeya Cotra and Mark Xu.
This Can’t Go On by Holden Karnofsky
Rationalism before the Sequences, by Eric S Raymond.
I liked this review by A Ray who noted one source of value here is the extensive bibliography.
Lies, Damn Lies, and Fabricated Options, by Duncan Sabien
Fun with +12 OOMs of Compute, by Daniel Kokotajlo.
Nostalgebraist’s review was particularly interesting.
What 2026 looks like by Daniel Kokotajlo
Ngo and Yudkowsky on alignment difficulty. This didn’t naturally cluster into the same group of vote totals as the other silver-prize posts, but it was in the top 10. I think the post was fairly hard to read, and didn’t have easily digestible takeaways, but nonetheless I think it kicked off some of the most important conversations in the AI Alignment space and warrants inclusion in this tier.
Bronze Prize Posts
Although there’s not a clear clustering after this point, when I eyeball how important the next several posts were, it seems to me appropriate to give $400 to each of:
How To Write Quickly While Maintaining Epistemic Rigor, by John Wentworth
Science in a High-Dimensional World by John Wentworth
How factories were made safe by Jason Crawford
Cryonics signup guide #1: Overview by Mingyuan
Making Vaccine by John Wentworth
Taboo “Outside View” by Daniel Kokotajlo
All Possible Views About Humanity’s Future Are Wild by Holden Karnofsky
Another (outer) alignment failure story by Paul Christiano
Split and Commit by Duncan Sabien
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) by Andrew Critch
There’s no such thing as a tree (phylogenetically), by eukaryote
The Plan by John Wentworth
Trapped Priors As A Basic Problem Of Rationality by Scott Alexander
Finite Factored Sets by Scott Garrabrant
Selection Theorems: A Program For Understanding Agents by John Wentworth
Slack Has Positive Externalities For Groups by John Wentworth
My research methodology by Paul Christiano
Honorable Mentions
This final group has the most arbitrary cutoff of all, and includes some judgment calls about how many medium or strong votes each post had among 1000+ karma users, and in some edge cases my own subjective guess of how important it was.
These authors each get $100 per post.
The Rationalists of the 1950s (and before) also called themselves “Rationalists” by Owain Evans
Ruling Out Everything Else by Duncan Sabien
Leaky Delegation: You are not a Commodity by Darmani
Feature Selection by Zack Davis
Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions) by Duncan Sabien
larger language models may disappoint you [or, an eternally unfinished draft] by Nostalgebraist
Self-Integrity and the Drowning Child by Eliezer Yudkowsky
Comments on Carlsmith’s “Is power-seeking AI an existential risk?” by Nate Soares
Working With Monsters by John Wentworth
Simulacrum 3 As Stag-Hunt Strategy by John Wentworth
EfficientZero: How It Works by 1a3orn
Does Georgism Work? Part 1: Is Land Really A Big Deal? by Lars Doucet (submitted to the Review process by Sune)
Catching the Spark by Logan Strohl
Specializing in Problems We Don’t Understand by John Wentworth
Shoulder Advisors 101 by Duncan Sabien
Notes from “Don’t Shoot the Dog” by Julia Wise
Why has nuclear power been a flop? by Jason Crawford
Whole Brain Emulation: No Progress on C. elegans After 10 Years by niconiconi
Frame Control by Aella
Worst-case thinking in AI alignment by Buck
Yudkowsky and Christiano discuss “Takeoff Speeds” by Eliezer Yudkowsky, Paul Christiano, and presumably some MIRI editors
You are probably underestimating how good self-love can be by charlie.rs
Infra-Bayesian physicalism: a formal theory of naturalized induction by Vanessa Kosoy
This actually fell under my cutoff for “number of medium+ votes”. I’ll be honest: I don’t even really understand Infra-Bayesianism. But every time someone attempts to explain it to me, I feel like I get a taste of something that will one day be important. I’m giving it an honorable mention, but take it with that grain of salt.
Jean Monnet: The Guerilla Bureaucrat by Martin Sustrik
Seven Years of Spaced Repetition Software in the Classroom by tanagrabeast
Coordination Schemes Are Capital Investments by Raemon
I don’t get a prize tho. :(
The Point of Trade by Eliezer Yudkowsky
Saving Time by Scott Garrabrant
What Do GDP Growth Curves Really Mean? by John Wentworth
Highlights from The Autobiography of Andrew Carnegie by Jason Crawford
Grokking the Intentional Stance by jbkjr
Honorable-est Mention
I… choose to wield my dictatorial power over the review process to refuse to give Elephant Seal 2 a prize, even though it landed a respectably high “rank 39” in the weighted vote totals. But it sure does seem to deserve an Honorable-ish Mention anyway, given how much people loved it. I also quite liked Coafos’ review of Elephant Seal 2, which was also the second-highest-karma review, but which I likewise use my dictatorial powers to refuse to give a prize to.
Fight me.
Prize Totals
When you add all that up, here are the prize totals. Reminder, if you received a prize, please fill out your payment contact email and PayPal so we can pay you.
@johnswentworth $2800
@Daniel Kokotajlo $1600
@Duncan_Sabien $1300
@paulfchristiano $1100
@HoldenKarnofsky $1000 (but he has declined it, to avoid any potential conflict of interest)
@Eliezer Yudkowsky $1200
@Mark Xu $1000
@AnnaSalamon $800
@Eric Raymond $600
@jasoncrawford $600
@Scott Garrabrant $500
@mingyuan $400
@Andrew_Critch $400
@eukaryote $400
@Scott Alexander $400
@Richard_Ngo $300
@Ajeya Cotra $200
@Owain_Evans $100
@Darmani $100
@Zack_M_Davis $100
@nostalgebraist $100
@So8res $100
@1a3orn $100
Lars Doucet $100
@LoganStrohl $100
@juliawise $100
@niconiconi $100
@Aella $100
@Buck $100
@charlie.rs $100
@Vanessa Kosoy $100
@Martin Sustrik $100
@tanagrabeast $100
@jbkjr $100