As far as I know, there is no good one, and this is a moderately-sized oversight by the rationality/EA community. In particular, there is no census of the number of people working on each AI alignment agenda. I want to create one as a side project, but I haven’t had time. You might find the following partial data useful:
The 2021 AI Alignment Literature Review and Charity Comparison is the last overview of all active AI alignment organizations. Note that this excludes independent researchers like John Wentworth and Vanessa Kosoy, and does not have data on the size of each organization.
The 2019 Leaders Forum is the most recent instance in which many EA organizations' beliefs about talent needs were aggregated.
The 2020 EA Survey is the latest data on which causes EAs think are important.
As far as I know, there’s nothing like this for the rationality community.
Also, the State of AI Report 2021 has a graph (on slide 157) of the number of people working on long-term AI alignment research at various organizations.
Thanks Thomas! I really appreciate this!