Ilya's departure is momentous.
What do we know about those other departures? The NYT article has this:
"Jan Leike, who ran the Super Alignment team alongside Dr. Sutskever, has also resigned from OpenAI. His role will be taken by John Schulman, another company co-founder."
I have not been able to find any other traces of this information yet.
We do know that Pavel Izmailov has joined xAI: https://izmailovpavel.github.io/
Leopold Aschenbrenner still lists OpenAI as his affiliation everywhere I see. The only recent traces of his activity seem to be likes on Twitter: https://twitter.com/leopoldasch/likes
Jan Leike confirms his departure: https://twitter.com/janleike/status/1790603862132596961
Dwarkesh is supposed to release his podcast with John Schulman today, so we can evaluate the quality of his thinking more closely. He is mostly known for reinforcement learning (https://scholar.google.com/citations?user=itSa94cAAAAJ&hl=en), although he has some track record of safety-related publications, including Unsolved Problems in ML Safety, 2021-2022 (https://arxiv.org/abs/2109.13916), and Let's Verify Step by Step (https://arxiv.org/abs/2305.20050), which includes Jan Leike and Ilya Sutskever among its co-authors.
No confirmation of him becoming the new head of Superalignment yet...
The podcast is here: https://www.dwarkeshpatel.com/p/john-schulman?initial_medium=video
From reading the first 29 minutes of the transcript, my impression is: he is strong enough to lead an org to AGI (it seems many people are strong enough to do this from our current level, and the conversation does seem to show that we are pretty close), but I don't get the feeling that he is strong enough to deal with issues related to AI existential safety. At least, that's my initial impression :-(
This interview was terrifying to me (and I think to Dwarkesh as well). Schulman continually demonstrates that he hasn't really thought about AGI future scenarios in much depth and sort of handwaves away any talk of future dangers.
Right off the bat he acknowledges that they reasonably expect AGI in 1-5 years or so, and even though Dwarkesh pushes him, he doesn't present any more detailed plan for safety than "Oh, we'll need to be careful and cooperate with the other companies... I guess..."
Here is my coverage of it. Given that this is a 'day minus one' interview of someone in a different position, and given everything else we already know about OpenAI, I thought this went about as well as it could have. I don't want to see false confidence in that kind of spot, and OpenAI's failure to have a plan for that scenario is not news.
I have so much more confidence in Jan and Ilya. Hopefully they go somewhere to work on AI alignment together. The critical time seems likely to be soon. See this clip from an interview with Jan: https://youtube.com/clip/UgkxFgl8Zw2bFKBtS8BPrhuHjtODMNCN5E7H?si=JBw5ZUylexeR43DT
[Edit: watched the full interview with John and Dwarkesh. John seems kinda nervous, and caught a bit unprepared to answer questions about how OpenAI might work on alignment. Most of the interesting thoughts he put forward for future work were about capabilities. Hopefully he delves deeper into alignment work if he's going to remain in charge of it at OpenAI.]
Pavel Izmailov has removed the mention of xAI and now lists Anthropic as his next job (on all his platforms).
His CV says that he is doing Scalable Oversight at Anthropic: https://izmailovpavel.github.io/files/CV.pdf
His LinkedIn also states the end of his OpenAI employment as May 2024: https://www.linkedin.com/in/pavel-izmailov-8b012b258/
Leopold Aschenbrenner's Twitter does say "ex-superalignment" now: https://x.com/leopoldasch?lang=en
Leopold and Pavel were out (“fired for allegedly leaking information”) in April. https://www.silicon.co.uk/e-innovation/artificial-intelligence/openai-fires-researchers-558601