Some ideas relating to comms/policy:
- Communicate your models of AI risk to policymakers.
- Help policymakers understand emergency scenarios (especially misalignment scenarios) and how to prepare for them.
- Use your lobbying/policy teams primarily to raise awareness about AGI and to help policymakers prepare for potential AGI-related global security risks.
- Develop simple, clear frameworks that describe which dangerous capabilities you are tracking. (I think OpenAI’s preparedness framework is a good example, particularly regarding simplicity, clarity, and readability; a minimal sketch of what such a framework might look like follows this list.)
- Advocate for increased transparency into frontier AI development through measures like stronger reporting requirements, whistleblower mechanisms, embedded auditors/resident inspectors, etc.
- Publicly discuss threat models (kudos to DeepMind).
- Engage in public discussions/debates with people like Hinton, Bengio, Hendrycks, and Kokotajlo.
- Encourage employees to engage in such discussions/debates, share their threat models, etc.
- Make capability forecasts public (predictions for when models will reach specified capabilities).
- Communicate under what circumstances you think major government involvement would be necessary (e.g., nationalization, “CERN for AI” setups).
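To make the "simple/clear framework" idea above concrete, here is a minimal sketch of how a lab might represent tracked dangerous-capability categories and deployment thresholds as structured data. Everything here (the category names, the risk levels, the `TrackedCapability` class, the deployment rule) is an illustrative assumption loosely inspired by published frameworks like OpenAI's preparedness framework; it is not that framework's actual schema.

```python
# Hypothetical sketch of a dangerous-capability tracking framework as
# structured data. Categories and levels are illustrative assumptions,
# not any lab's actual schema.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"


# Ordering used to compare risk levels (LOW < MEDIUM < HIGH < CRITICAL).
LEVEL_ORDER = [RiskLevel.LOW, RiskLevel.MEDIUM, RiskLevel.HIGH, RiskLevel.CRITICAL]


@dataclass(frozen=True)
class TrackedCapability:
    category: str                  # e.g., "cybersecurity", "autonomy"
    description: str               # plain-language statement of the concern
    current_level: RiskLevel       # assessed level for the current model
    deployment_ceiling: RiskLevel  # highest level at which deployment is allowed


FRAMEWORK = [
    TrackedCapability(
        category="cybersecurity",
        description="Model can meaningfully assist in discovering or "
                    "exploiting software vulnerabilities.",
        current_level=RiskLevel.LOW,
        deployment_ceiling=RiskLevel.HIGH,
    ),
    TrackedCapability(
        category="autonomy",
        description="Model can self-replicate or acquire resources "
                    "without human oversight.",
        current_level=RiskLevel.LOW,
        deployment_ceiling=RiskLevel.MEDIUM,
    ),
]


def deployable(framework: list[TrackedCapability]) -> bool:
    """A model is deployable only if every tracked capability is at or
    below its deployment ceiling."""
    return all(
        LEVEL_ORDER.index(c.current_level) <= LEVEL_ORDER.index(c.deployment_ceiling)
        for c in framework
    )


if __name__ == "__main__":
    print(deployable(FRAMEWORK))  # True under the illustrative levels above
```

The point of the sketch is the readability property the list item asks for: a policymaker can scan the category list and the one-line deployment rule without wading through prose, and the same structure doubles as a public artifact that third parties can audit against.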
I mostly agree on current margins — the more I mistrust a lab, the more I like transparency.
I observe that, on the inside view, unilateral transparency is unnecessary if you know you’re a reliably responsible lab, and some forms of transparency are costly. So the more responsible a lab seems, the more sympathetic we should be to it saying “we thought really hard and decided more unilateral transparency wouldn’t be optimific” (at least for some forms of transparency).