I also wanna echo Akash’s “this seems like a good document to exist.” I appreciate having Sam’s worldview laid out more clearly. I’m glad it explicitly mentions x-risk and puts it reasonably front-and-center (AFAICT DeepMind hasn’t done this, although I might have missed it?).
In some sense these are still “cheap words”, but I do think it’s still much better for company leadership to have explicitly stated their worldview, x-risk included.
I do still obviously disagree with some key beliefs and frames here. From my perspective, focusing on what the public wants, and democratization, is quite weird. But, I’m at least somewhat sympathetic to “Sam needs to be reassuring multiple stakeholders here, the public is in fact a stakeholder for the strategy that he’s pursuing.”
I feel fairly sympathetic to “we need to iterate on systems and gain experience in the real world.” I don’t know that I agree, but when faced with “we’re going to die because we can’t iterate the way we usually do”, I think it’s not crazy to resolve that in the direction of “I guess we really need to figure out how to iterate” rather than “I guess we need to figure out everything from first principles.”
Many of us think the safest quadrant in this two-by-two matrix is short timelines and slow takeoff speeds; shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang, and a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.
I don’t know that I trust the cognitive process generating this. (I wouldn’t be at all surprised if this is a post-hoc rationalization of the thing Sam wanted to do anyway.)
But, for now taking it at face value, this seems like the biggest crux. I think it’s not completely crazy: I remember someone (I thought @Eli Tyre, but now I can’t find the relevant post) asking a few years ago whether shorter timelines might actually be better for coordination reasons, and that being a major update for me. Whoever the author was, it wasn’t someone running a giant AI lab.
But my sense is that ChatGPT did massively accelerate race dynamics. And while I maybe buy that “short timelines can be good for coordination”, that’s only true if you’re actually trying to handle that coordination well. Releasing ChatGPT seems to have shortened timelines while also making coordination worse. And meanwhile the profit incentive makes me very suspicious that this was actually done for good reasons.
OpenAI has done some things that are better-than-nothing for coordination, but not nearly strong enough to actually address the major downsides of accelerating race dynamics.
I wouldn’t be at all surprised if this is a post-hoc rationalization of the thing Sam wanted to do anyway
Altman’s recent fundraising for AI chip manufacturing would increase available AI compute. Is there a good reason why someone worried about a compute overhang would be accelerating the development of AI chips? If not, post-hoc rationalization seems like a good explanation. See also here for further discussion.
It also doesn’t seem like “shorter timelines” in the safest quadrant has much to do with their current strategy: the GPT-4 paper has a section on how they postponed the release to reduce acceleration.