A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI): AI systems that achieve or exceed human performance across a wide range of cognitive tasks. In pursuing this goal, they may develop and deploy AI systems that pose particularly significant risks. While they have already taken some measures to mitigate these risks, best practices have not yet emerged. To support the identification of best practices, we sent a survey to 92 leading experts from AGI labs, academia, and civil society and received 51 responses. Participants were asked how much they agreed with 50 statements about what AGI labs should do. Our main finding is that participants, on average, agreed with all of them. Many statements received extremely high levels of agreement. For example, 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, and red teaming, and should adopt safety restrictions on model usage. Ultimately, our list of statements may serve as a helpful foundation for efforts to develop best practices, standards, and regulations for AGI labs.
I’m really excited about this paper. It seems to be great progress toward figuring out what labs should do and making that common knowledge.
(And apparently safety evals and pre-deployment auditing are really popular, hooray!)
Edit: see also the blogpost.
I think this is now the canonical collection of ideas for stuff AI labs should do (from an x-risk perspective). (Here are the 50 ideas with brief descriptions, followed by 50 ideas suggested by respondents; both lists are also copied in a comment below. The only other relevant collection-y public source I’m aware of is my Ideas for AI labs: Reading list.) So it seems worth commenting with promising ideas not listed in the paper:
Alignment (including interpretability) research (as a common good, separate from aligning your own models)
Model-sharing: cautiously sharing powerful models with some external safety researchers to advance their research (separately from sharing for the sake of red-teaming)
Transparency stuff
Coordination stuff
Publication stuff
Planning stuff
[Supporting other labs doing good stuff]
[Supporting other kinds of actors (e.g., government) doing good stuff]
(I hope to update this comment with details later.)
When I google the title of the paper literally the only hit is this LessWrong post. Do you know where the paper was posted and whether there exists an HTML version (or a LaTeX, or a Word, or a Google Doc version)?
It was posted at https://arxiv.org/abs/2305.07153. I’m not aware of versions other than the PDF.
Huh, interesting. Seems good to get an HTML version then, since in my experience PDFs have a pretty sharp dropoff in readership.
Thanks for the nudge: we’ll consider producing an HTML version!
For reference, here are the 50 tested ideas (in descending order of popularity):
And here are 50 ideas suggested by respondents:
Interesting how many of these are “democracy / citizenry-involvement” oriented. Strongly agree with 18 (whistleblower protection) and 38 (simulate cyber attacks).
20 (good internal culture), 27 (technical AI people on boards), and 29 (three lines of defense) sound good to me; I’m excited about 31 if mandatory interpretability standards exist.
42 (on sentience) seems pretty important but I don’t know what it would mean.
This is super late, but I recently posted: Improving the Welfare of AIs: A Nearcasted Proposal
Assuming you mean the second 42 (“AGI labs take measures to limit potential harms that could arise from AI systems being sentient or deserving moral patienthood”): I also don’t know what labs should do, so I asked an expert yesterday and will reply here if they know of good proposals...
Thanks!
The top 6 of the ones in the paper (the ones I think got >90% somewhat or strongly agree, listed below) seem pretty similar to me. Are there important reasons people might support one over another?
Pre-deployment risk assessments
Evaluations of dangerous capabilities
Third-party model audits
Red teaming
Pre-training risk assessments
Pausing training of dangerous models
I think 19 ideas got >90% agreement.
I agree the top ideas overlap. I think reasons one might support some over others depend on the details.