Zach Stein-Perlman
DeepMind’s “Frontier Safety Framework” is weak and unambitious
DeepMind: Frontier Safety Framework
Added updates to the post:
Leike tweets, including:
I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.
I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.
These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.
Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.
Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.
But over the past years, safety culture and processes have taken a backseat to shiny products.
Daniel Kokotajlo talks to Vox:
“I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen,” Kokotajlo told me. “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”
Kelsey Piper says:
I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.
TechCrunch says:
requests for . . . compute were often denied, blocking the [Superalignment] team from doing their work [according to someone on the team].
Piper is back:
OpenAI . . . says that going forward, they *won’t* strip anyone of their equity for not signing the secret NDA.
(This is slightly good but OpenAI should free all past employees from their non-disparagement obligations.)
The commitment—”20% of the compute we’ve secured to date” (as of July 2023), to be used “over the next four years”—may amount to quite little by 2027, given that compute use increases exponentially. I’m confused about why people think it’s a big commitment.
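To illustrate with made-up numbers (the 3x-per-year growth rate is an assumption for the sketch, not a claim about OpenAI's actual compute):

```python
# Hypothetical illustration: a fixed commitment of 20% of mid-2023 compute
# shrinks as a fraction of total compute if compute grows exponentially.
# Assumes (arbitrarily) 3x compute growth per year from a 2023 baseline of 1.0.
baseline_2023 = 1.0
growth_per_year = 3.0
commitment = 0.20 * baseline_2023  # "20% of compute secured to date" (July 2023)

for year in range(2023, 2028):
    total = baseline_2023 * growth_per_year ** (year - 2023)
    print(f"{year}: commitment is {commitment / total:.1%} of that year's compute")
# Under this assumption, by 2027 the commitment is well under 1% of that year's compute.
```

Any plausible growth rate gives the same qualitative picture: a commitment fixed at a 2023 baseline becomes a small fraction of later-year compute.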
Two other executives left two weeks ago, but that’s not obviously safety-related.
Ilya Sutskever and Jan Leike resign from OpenAI
We’ve evaluated GPT-4o according to our Preparedness Framework and in line with our voluntary commitments. Our evaluations of cybersecurity, CBRN, persuasion, and model autonomy show that GPT-4o does not score above Medium risk in any of these categories. This assessment involved running a suite of automated and human evaluations throughout the model training process. We tested both pre-safety-mitigation and post-safety-mitigation versions of the model, using custom fine-tuning and prompts, to better elicit model capabilities.
GPT-4o has also undergone extensive external red teaming with 70+ external experts in domains such as social psychology, bias and fairness, and misinformation to identify risks that are introduced or amplified by the newly added modalities. We used these learnings to build out our safety interventions in order to improve the safety of interacting with GPT-4o. We will continue to mitigate new risks as they’re discovered.
[Edit after Simeon replied: I disagree with your interpretation that they’re being intentionally very deceptive. But I am annoyed by (1) them saying “We’ve evaluated GPT-4o according to our Preparedness Framework” when the PF doesn’t contain specific evals and (2) them taking credit for implementing their PF when they’re not meeting its commitments.]
How can you make the case that a model is safe to deploy? For now, you can do risk assessment and notice that it doesn’t have dangerous capabilities. What about in the future, when models do have dangerous capabilities? Here are four options:
Implement safety measures as a function of risk assessment results, such that the measures feel like they should be sufficient to abate the risks
This is mostly what Anthropic’s RSP does (at least so far — maybe it’ll change when they define ASL-4)
Use risk assessment techniques that evaluate safety given deployment safety practices
This is mostly what OpenAI’s PF is supposed to do (measure “post-mitigation risk”), but the details of their evaluations and mitigations are very unclear
Achieve alignment (and get strong evidence of that)
Related: RSPs, safety cases.
Maybe lots of risk comes from the lab using AIs internally to do AI development. The first two options are fine for preventing catastrophic misuse from external deployment but I worry they struggle to measure risks related to scheming and internal deployment.
Safety-wise, they claim to have run it through their Preparedness Framework and the external-expert red team.
I’m disappointed and I think they shouldn’t get much credit PF-wise: they haven’t published their evals, published a report on results, or even published a high-level “scorecard.” They are not yet meeting the commitments in their beta Preparedness Framework — some stuff is unclear but at the least publishing the scorecard is an explicit commitment.
(It’s now been six months since they published the beta PF!)
[Edit: not to say that we should feel much better if OpenAI was successfully implementing its PF—the thresholds are way too high and it says nothing about internal deployment.]
There should be points for how organizations act with respect to legislation. On the SB 1047 bill that CAIS co-sponsored, we’ve noticed some AI companies being much more antagonistic than others. I think [this] is probably a larger differentiator for an organization’s goodness or badness.
If there’s a good writeup on labs’ policy advocacy I’ll link to and maybe defer to it.
Adding to the confusion: I’ve nonpublicly heard from people at UKAISI and [OpenAI or Anthropic] that the Politico piece is very wrong and DeepMind isn’t the only lab doing pre-deployment sharing (and that it’s hard to say more because info about not-yet-deployed models is secret). But no clarification on commitments.
But everyone has lots of duties to keep secrets or preserve privacy and the ones put in writing often aren’t the most important. (E.g. in your case.)
I’ve signed ~3 NDAs. Most of them are irrelevant now and useless for people to know about, like yours.
I agree in special cases it would be good to flag such things — like agreements to not share your opinions on a person/org/topic, rather than just keeping trade secrets private.
Related: maybe a lab should get full points for a risky release if the lab says it’s releasing because the benefits of [informing / scaring / waking-up] people outweigh the direct risk of existential catastrophe and other downsides. It’s conceivable that a perfectly responsible lab would do such a thing.
Capturing all nuances can trade off against simplicity and legibility. (But my criteria are not yet on the efficient frontier or whatever.)
Thanks. I agree you’re pointing at something flawed in the current version and generally thorny. Strong-upvoted and strong-agreevoted.
Generally, the deployment criteria should be gated behind “has a plan to do this when models are actually powerful and their implementation of the plan is credible”.
I didn’t put much effort into clarifying this kind of thing because it’s currently moot—I don’t think it would change any lab’s score—but I agree.[1] I think e.g. a criterion “use KYC” should technically be replaced with “use KYC OR say/demonstrate that you’re prepared to implement KYC and have some capability/risk threshold to implement it and [that threshold isn’t too high].”
Don’t pass cost benefit for current models which pose low risk. (And it seems the criterion is “do you have them implemented right now?”) . . . .
(A general problem with this project is somewhat arbitrarily requiring specific countermeasures. I’m afraid this is probably intrinsic to the approach.)
Yeah. The criteria can be like “implement them or demonstrate that you could implement them and have a good plan to do so,” but it would sometimes be reasonable for the lab to not have done this yet. (Especially for non-frontier labs; the deployment criteria mostly don’t work well for evaluating non-frontier labs. Also if demonstrating that you could implement something is difficult, even if you could implement it.)
I get the sense that this criterion doesn’t quite handle the edge cases needed to accommodate reasonable choices orgs might make.
I’m interested in suggestions :shrug:
[1] And I think my site says some things that contradict this principle, like ‘these criteria require keeping weights private.’ Oops.
Two noncentral pages I like on the site:
Other scorecards & evaluation, collecting other safety-ish scorecard-ish resources.
Commitments, collecting AI companies’ commitments relevant to AI safety and extreme risks.
Yay @Zac Hatfield-Dodds of Anthropic for feedback and corrections including clarifying a couple of Anthropic’s policies. Two pieces of not-previously-public information:
I was disappointed that Anthropic’s Responsible Scaling Policy only mentions evaluation “During model training and fine-tuning.” Zac told me “this was a simple drafting error—our every-three months evaluation commitment is intended to continue during deployment. This has been clarified for the next version, and we’ve been acting accordingly all along.” Yay.
I said labs should have a “process for staff to escalate concerns about safety” and “have a process for staff and external stakeholders to share concerns about risk assessment policies or their implementation with the board and some other staff, including anonymously.” I noted that Anthropic’s RSP includes a commitment to “Implement a non-compliance reporting policy.” Zac told me “Beyond standard internal communications channels, our recently formalized non-compliance reporting policy meets these criteria [including independence], and will be described in the forthcoming RSP v1.1.” Yay.
I think it’s cool that Zac replied (but most of my questions for Anthropic remain).
I have not yet received substantive corrections/clarifications from any other labs.
(I have made some updates to ailabwatch.org based on Zac’s feedback—and revised Anthropic’s score from 45 to 48—but have not resolved all of it.)
I mostly agree. And I think when people say race dynamics they often actually mean speed of progress and especially “Effects of open models on ease of training closed models [and open models],” which you mention.
But here is a race-dynamics story:
Alice has the best open model. She prefers for AI progress to slow down but also prefers to have the best open model (for reasons of prestige or, if different companies’ models are not interchangeable, future market share). Bob releases a great open model. This incentivizes Alice to release a new state-of-the-art model sooner.
Yep, lots of people independently complain about “lab.” Some of those people want me to use scary words in other places too, like replacing “diffusion” with “proliferation.” I wouldn’t do that, and don’t replace “lab” with “mega-corp” or “juggernaut,” because it seems [incorrect / misleading / low-integrity].
I’m sympathetic to the complaint that “lab” is misleading. (And I do occasionally use “company” rather than “lab,” e.g. in the header.) My friends usually talk about “the labs,” not “the companies,” but to most audiences “company” is more accurate.
I currently think “company” is about as good as “lab.” I may change the term throughout the site at some point.
This kind of feedback is very helpful to me; thank you! Strong-upvoted and weak-agreevoted.
(I have some factual disagreements. I may edit them into this comment later.)
(If you think Dan’s comment makes me suspect this project is full of issues/mistakes, react 💬 and I’ll consider writing a detailed soldier-ish reply.)
Yep
Two weeks ago I sent a senior DeepMind staff member some “Advice on RSPs, especially for avoiding ambiguities”; #1 on my list was “Clarify how your deployment commitments relate to internal deployment, not just external deployment” (since it’s easy and the OpenAI PF also did a bad job of this)
:(