Safety consultations for AI lab employees

Many people who are concerned about AI x-risk work at AI labs, in the hope of doing directly useful work, boosting a relatively responsible lab, or causing their lab to be safer on the margin.

Labs do lots of things that affect AI safety one way or another. Keeping track of all this would be hard at the best of times; in practice, labs are incentivized to be misleading in both their public and internal comms, making it even harder to follow what’s happening. As a result, people end up misinformed about what’s happening, which often leads them to make suboptimal choices.

In my AI Lab Watch work, I pay attention to what AI labs do and what they should do. So I’m in a good position to inform interested but busy people.

I’m announcing an experimental service providing the following:

  • Calls for current and prospective employees of frontier AI labs.

    • Book here

    • On these (confidential) calls, I can answer your questions about frontier AI labs’ current safety-relevant actions, policies, commitments, and statements, to help you to make more informed choices.

These calls are open to any employee of OpenAI, Anthropic, Google DeepMind, Microsoft AI, or Meta AI, and to anyone strongly considering working at one of these labs (with an offer in hand or expecting to receive one).

    • If that isn’t you, feel free to request a call and I may still take it.

  • Support for potential whistleblowers. If you’re at a lab and aware of wrongdoing, I can put you in touch with:

    • Former lab employees and others who can offer confidential advice

    • Vetted employment lawyers

Communications professionals who can advise on talking to the media

    If you need this, email zacharysteinperlman at gmail or message me on Signal at 734 353 3975.

I’ll offer this for at least the next month; I don’t yet know whether I’ll continue it long-term.

My hope is that this service makes it much easier for lab employees to have an informed understanding of labs’ safety-relevant actions, commitments, and responsibilities.


If you want to help (e.g. if I should perhaps introduce lab employees to you), let me know.

You can give me anonymous feedback.


Crossposted from AI Lab Watch. Subscribe on Substack.