It seems Dario left Dustin Moskovitz with a different impression—that Anthropic had a policy/commitment to not meaningfully advance the frontier:
Well, if Dustin sees no problem in talking about it, and it’s become a major policy concern, then I guess I should disclose that I spent a while talking with Dario back in late October 2022 (i.e. pre-RSP, which came in Sept 2023), and we discussed Anthropic’s scaling policy at some length. I too came away with the same impression everyone else seems to have: that Anthropic’s AI-arms-race policy was to invest heavily in scaling, creating models at or pushing the frontier to do safety research on, but that it would only release access to second-best models & would not ratchet capabilities up; it would wait for someone else to do so before catching up. So it would not contribute to races, but it also would not fall behind and become irrelevant/noncompetitive.
And Anthropic’s release of Claude-1 and Claude-2 always seemed to match that policy—even if Claude-2 had, for a long time, a larger context window than any other decent available model, Claude-2 was still substantially weaker than ChatGPT-4. (Recall that the casus belli for Sam Altman trying to fire Helen Toner from the OA board was a passing reference in a co-authored paper to Anthropic not pushing the frontier the way OA did.)
What I’m concluding from the discussion so far is that I should have read the Anthropic RSP more carefully than I did.
I hear you sometimes share dual-use (or plain capabilities?) ideas with Anthropic. If that’s true, does this change your policy?
Anthropic is in little need of ideas from me, but yeah, I’ll probably pause such things for a while. I’m not saying the RSP is bad, but I’d like to see how things work out.
They indeed did not advance the frontier with this launch (at least not meaningfully, possibly not at all). But “meaningfully advance the frontier” is quite different from both “stay on the frontier” and “slightly push the envelope while creating marketing hype”, which is what I think is going on here?
Yeah, seems plausible; but either way it seems worth noting that Dario left Dustin, Evan and Anthropic’s investors with quite different impressions here.
I interpreted you, in your previous comment, as claiming that Anthropic’s RSP is explicit in its compatibility with meaningfully pushing the frontier. Dustin is under the impression that Anthropic verbally committed otherwise. Whether or not Claude 3 pushed the frontier seems somewhat orthogonal to this question—did Anthropic commit and/or heavily imply that they weren’t going to push the frontier, and if so, does the RSP quietly contradict that commitment? My current read is that the answer to both questions is yes. If this is the case, I think that Anthropic has been pretty misleading about a crucial part of their safety plan, and this seems quite bad to me.
I think that you’re correct that Anthropic at least heavily implied that they weren’t going to “meaningfully advance” the frontier (even if they have not made any explicit commitments about this). I’d be interested in hearing when Dustin had this conversation w/ Dario—was it pre- or post-RSP release?
And as far as I know, the only commitments they’ve made explicitly are in their RSP, which commits them to limiting their scaling to the rate at which they can advance and deploy safety measures. It’s unclear if the “sufficient safety measures” limitation is the only restriction on scaling, but I would be surprised if anyone senior at Anthropic was willing to make a concrete unilateral commitment to stay behind the curve.
My current story, based on public info, is that up until mid-2022 there was indeed an intention to stay at the frontier but not push it forward significantly. This changed sometime in late 2022 or early 2023, maybe after ChatGPT was released and the AGI race became somewhat “hot”.
I feel some kind of missing mood in these comments. It seems like you’re saying “Anthropic didn’t make explicit commitments here”, and that you’re not treating it as particularly important whether they gave people different impressions, or benefited from doing so.
(AFAICT you haven’t explicitly stated “that’s not a big deal”, but, it’s the vibe I get from your comments. Is that something you’re intentionally implying, or do you think of yourself as mostly just trying to be clear on the factual claims, or something like that?)