Sam Altman and OpenAI have both said they are aiming for incremental releases/deployment for the primary purpose of allowing society to prepare and adapt. As opposed to, say, dropping large capability jumps out of the blue which surprise people.
I think “They believe incremental release is safer because it promotes societal preparation” should certainly be in the hypothesis space for the reasons behind these actions, along with scaling slowing down and frog-boiling. My guess is that it is more likely than both of those reasons (they have stated it as their reasoning multiple times, and I don’t think scaling is hitting a wall).
Yeah, “they’re following their stated release strategy for the reasons they said motivated that strategy” also seems likely to share some responsibility. (I might not think those reasons justify that release strategy, but that’s a different argument.)
I wonder if that is actually a sound view, though. I just started reading Like War (interesting and seems correct/on target so far, but I’m really just starting it). Given its subject, the impact, reception, and use of social media and networking technologies and their general social results, it seems society is not yet prepared for or adapted to even that innovation. If the fears about AI are even close to getting things right, I suspect that “allowing society to prepare and adapt” implies putting everything on hold, freezing in place, for at least a decade and probably longer.
Altman’s and OpenAI’s intentions may well be aimed at that stated goal, but I think they are basing that approach on how “the smartest people in the room” react to AI, not on how the general public, or the most opportunistic people in the room, will react.