Exactly. And then one’s estimate of the actual impact depends on whether one believes Sutskever is one of the best people to lead an AI existential safety effort.
If one believes that, and if the outcome is that he ends up less likely to do so in the context of the leading AGI/ASI project, then the impact on safety might be very negative.
If one does not believe that he is one of the best people to lead this kind of effort, then one might think that the impact is not negative.
(I personally believe Ilya’s approach is one of the better ones; it seems to me he has been fixing the defects in the original OpenAI superalignment plan and gradually working toward a better one, though people’s views on that might differ.)