The podcast is here: https://www.dwarkeshpatel.com/p/john-schulman?initial_medium=video
From reading the first 29 minutes of the transcript, my impression is that he is strong enough to lead an org to AGI (it seems many people are strong enough to do this from our current level, and the conversation does seem to show that we are pretty close), but I don’t get the feeling that he is strong enough to deal with issues of AI existential safety. At least, that’s my initial impression :-(
This interview was terrifying to me (and, I think, to Dwarkesh as well): Schulman continually demonstrates that he hasn’t really thought about AGI future scenarios in much depth and sort of handwaves away any talk of future dangers.
Right off the bat he acknowledges that they reasonably expect AGI in 1-5 years or so, and even though Dwarkesh pushes him, he doesn’t present any more detailed plan for safety than “Oh, we’ll need to be careful and cooperate with the other companies... I guess...”
Here is my coverage of it. Given that this is a ‘day minus one’ interview of someone in a different position, and given everything else we already know about OpenAI, I thought it went about as well as it could have. I don’t want to see false confidence in that kind of spot, and OpenAI’s failure to have a plan for that scenario is not news.
I have so much more confidence in Jan and Ilya. Hopefully they go somewhere to work on AI alignment together. The critical time seems likely to be soon. See this clip from an interview with Jan: https://youtube.com/clip/UgkxFgl8Zw2bFKBtS8BPrhuHjtODMNCN5E7H?si=JBw5ZUylexeR43DT
[Edit: I watched the full interview with John and Dwarkesh. John seems kinda nervous and caught a bit unprepared to answer questions about how OpenAI might work on alignment. Most of the interesting thoughts he put forward for future work were about capabilities. Hopefully he delves deeper into alignment work if he’s going to remain in charge of it at OpenAI.]