I don’t think this is a fair consideration of the article’s entire message. This line from the article specifically calls out slowing down AI progress:
we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.
Having spent a long time reading through OpenAI’s statements, I suspect that they are trying to strike a difficult balance between:
A) Doing the right thing by way of AGI safety (including considering options like slowing down or not releasing certain information and technology).
B) Staying at or close to the lead of the race to AGI, given they believe that is the position from which they can have the most positive impact in terms of changing the development path and broader conversation around AGI.
Instrumental goal (B) is in tension (but not necessarily stark conflict, depending on how things play out) with ultimate goal (A).
What they’re presenting here in this article are ways to potentially create a situation in which they could slow down and be confident that doing so wouldn’t actually lead to worse eventual outcomes for AGI safety. They are also trying to promote and escalate the societal conversation around AGI x-risk.
While I think it’s totally valid to criticise OAI on aspects of their approach to AGI safety, I think it’s also fair to say that they are genuinely trying to do the right thing and are simply struggling to chart what is ultimately a very difficult path.
Yeah I think my complaint is that OpenAI seems to be asserting almost a “boundary” re goal (B), like there’s nothing that trades off against staying at the front of the race, and they’re willing to pay large costs rather than risk being the second-most-impressive AI lab. Why? Things don’t add up.
(Example large cost: they’re not putting large organizational attention on the alignment problem. The alignment team projects don’t have many people working on them, they’re not doing things like inviting careful thinkers to evaluate their plans under secrecy, and they’re not taking any of the other obvious actions that would come from putting serious resources into not blowing everyone up.)
I don’t buy that (B) is that important. It seems more driven by some strange status / narrative-power thing? And I haven’t ever seen them make explicit their case for why they’re sacrificing so much for (B). Especially when a lot of their original safety people fucking left due to some conflict around this?
Broadly, many things about their behaviour strike me as deceptive / making it hard to form a counternarrative / trying to conceal something odd about their plans.
One final question: why do they say “we think it would be good if an international agency limited compute growth” but not also “and we will obviously be trying to partner with other labs to do this ourselves in the meantime, although not if another lab is already training something more powerful than GPT-4”?