Lobby governments for AI security regulations
Just as there are security regulations on bioengineering, chemicals, nuclear power, weapons, and so on, there could be regulations on AI, with official auditing of risks. This would create more demand for officially recognized “AI risk” experts, and would force projects to pay more attention to those issues (even if only to come up with rationalizations for why their project is safe).
This doesn’t have to mean banning “unsafe” research; if a “safe AI” certification exists, it might become a prerequisite for certain grants, or serve as a marketing argument (even if the standards behind the certification are not strict enough to actually guarantee safety).