Unfortunately I don’t have well-formed thoughts on this topic. I wonder if there are people who specialize in AI lab governance and have written about this, but I’m not personally aware of such writings. To brainstorm some ideas:
- Conduct and publish anonymous surveys of employee attitudes about safety.
- Encourage executives, employees, board members, advisors, etc., to regularly blog about governance and safety culture, including disagreements over important policies.
- Officially encourage internal and external whistleblowers (e.g., via financial rewards), and establish and publish policies about this.
- Publicly make safety commitments and regularly report on their status, such as how much compute and other resources have been allocated to and used by each safety team.
- Make and publish a commitment to publicly report negative safety news, which can then serve as a basis for whistleblowing if needed (i.e., if some manager decides to hide such news instead).
- Publish important governance documents. (Seemed too basic to mention, but apparently not.)