For me the crux is the influence of these events on Sutskever ending up sufficiently in charge of a leading AGI project. That appeared borderline true before; it would have become even more true if Altman’s firing had stuck without disrupting OpenAI overall; and right now, with the strike/ultimatum letter, it seems less likely than ever (whether he stays in an Altman-led org or goes elsewhere).
(It’s ambiguous whether Anthropic is behind at all, and then there’s DeepMind, which is already in the belly of Big Tech, so I don’t see how timelines noticeably change.)
Exactly. And then one’s estimate of the actual impact depends on whether one believes Sutskever is one of the best people to lead an AI existential safety effort.
If one believes that, and if the outcome is that he ends up less likely to do so in the context of the leading AGI/ASI project, then the impact on safety might be very negative.
If one does not believe that he is one of the best people to lead this kind of effort, then one might think that the impact is not negative.
(I personally believe Ilya’s approach is one of the better ones; it seems to me that he has been fixing the defects in the original OpenAI superalignment plan and gradually building a better one. People’s views on that might differ.)