I don’t think there is a great answer to “What is the most comprehensive repository of resources on the work being done in AI Safety?”
There is no great answer, but I'm compelled to list the few I know of (which I've been meaning to add to my Resources post):
Vael Gates’s transcripts, which attempt to cover multiple views but, by the nature of conversations, aren’t very legible;
The Stampy project, which aims to build a comprehensive AGI safety FAQ and to go beyond questions only — they do need motivated people;
Issa Rice’s AI Watch, which is definitely stuck in a corner of the Internet (if I didn’t work with Issa, I would never have discovered it); it has lots of data about orgs, people, and labs, but not much context.
Other mapping resources cover not the work being done but arguments and scenarios; for example, there’s Lukas Trötzmüller’s excellent argument compilation, but that wouldn’t exactly help someone get into the field faster.
Just in case you don’t know about it, there’s the AI alignment field-building tag on LW, which mentions an initiative run by plex, who also coordinates Stampy.
I’d be interested in reviewing stuff, yes, time permitting!
No need to apologize, I’m usually late as well!