Planned summary for the Alignment Newsletter:

<@The tradition continues@>(@2019 AI Alignment Literature Review and Charity Comparison@)! I’ll say nearly the same thing as I did last year:
This mammoth post goes through the work done within AI alignment from December 2019 to November 2020, from the perspective of someone trying to decide which of several AI alignment organizations to donate to. As part of this endeavor, Larks summarizes several papers published at various organizations, and compares each organization’s output against its budget and room for more funding.
Planned opinion:
I look forward to this post every year. It continues to be a stark demonstration of how much work _doesn’t_ get covered in this newsletter: while I tend to focus on the technical alignment problem, with some attention to AI governance and AI capabilities, this literature review spans many organizations working on existential risk, and as such includes many papers that never appeared here. Anyone who wants to donate to an organization working on AI alignment and/or x-risk should read this post.