Planned summary for the Alignment Newsletter:
Related to the previous summary, we also have a database of papers on transformative AI safety. It aims for comprehensive coverage of safety-motivated papers produced at organizations with a significant safety focus during the years 2016-2020, but it also includes other material, such as blog posts and content from earlier years. The database comes with a fair amount of analysis as well, which I won't go into here.
Planned opinion:
I like this project and analysis: it's a different view on the landscape of technical AI safety than I usually get to see. I especially recommend reading it if you want to get a sense of the people and organizations that make up the technical AI safety field; I'm not going into detail here because I mostly try to focus on object-level issues in this newsletter.
Looks fine, thanks.