One of the common arguments in favor of investing more resources into current governance approaches (e.g., evals, if-then plans, RSPs) is that there’s nothing else we can do. There’s no better alternative– these are the only things that labs and governments are currently willing to support.
The Compendium argues that there are other (valuable) things that people can do, with most of these actions focusing on communicating about AGI risks. Examples:
Share a link to this Compendium online or with friends, and provide your feedback on which ideas are correct and which are unconvincing. This is a living document, and your suggestions will shape our arguments.
Post your views on AGI risk to social media, explaining why you believe it to be a legitimate problem (or not).
Red-team companies’ plans to deal with AI risk, and call them out publicly if they do not have a legible plan.
One possible critique is that the Compendium’s suggestions are not particularly ambitious. This is likely because the authors are writing for a broader audience (people who haven’t been deeply engaged in AI safety).
For people who have been deeply engaged in AI safety, I think the natural steelman here is “focus on helping the public/government better understand the AI risk situation.”
There are at least some impactful and high-status examples of this (e.g., Hinton, Bengio, Hendrycks). For instance, I think most people would agree that over the last few years, Hinton, Bengio, and Hendrycks have had far more impact through their communications/outreach/policy work than through their technical research.
And it’s not just the famous people– I can think of ~10 junior or mid-career people who left technical research in the last year to help policymakers better understand AI progress and AI risk, and I think their work is likely far more impactful than if they had stayed in technical research. (And I’m even excluding people who are working on evals/if-then plans– I’m focusing on people who see their primary purpose as helping the public or policymakers develop “situational awareness”: stronger models of AI progress and AI risk, an understanding of the conceptual arguments for misalignment risk, etc.)