I am sorry that I took such a long time replying to this. First, thank you for your comment, as it answers all of my questions in a fairly detailed manner.
The impact of a map of research that includes the labs, people, organizations, and research papers focused on AI Safety seems high, and FLI’s 2017 map seems like a good start, at least for showing what types of research are occurring in AI Safety. In this vein, it is worth noting that Superlinear is offering a small prize of $1150 for whoever can “Create a visual map of the AGI safety ecosystem”, but I don’t think this is enough to incentivize the creation of the resource that is currently missing from this community. I don’t think there is a great answer to “What is the most comprehensive repository of resources on the work being done in AI Safety?”. Maybe I will try to make a GitHub repository with orgs, people, and labs, using FLI’s map as an initial blueprint. Would you be interested in reviewing this?
No need to apologize, I’m usually late as well!

I don’t think there is a great answer to “What is the most comprehensive repository of resources on the work being done in AI Safety?”
There is no great answer, but I am compelled to list some of the few I know of (which I wanted to add to my Resources post):
Vael Gates’s transcripts, which attempt to cover multiple views but, by the nature of conversations, aren’t very legible;
The Stampy project, which aims to build a comprehensive AGI safety FAQ and to go beyond just questions; they do need motivated people;
Issa Rice’s AI Watch, which is definitely stuck in a corner of the Internet (if I didn’t work with Issa, I would never have discovered it); it has lots of data about orgs, people, and labs, but not much context.
Other mapping resources cover not the work being done but arguments and scenarios; for example, there’s Lukas Trötzmüller’s excellent argument compilation, but that wouldn’t exactly help someone get into the field faster.
Just in case you don’t know about it, there’s the AI alignment field-building tag on LW, which mentions an initiative run by plex, who also coordinates Stampy.
I’d be interested in reviewing stuff, yes, time permitting!