we both agree it would not make sense to model OpenAI as part of the same power base
Hmm, I’m not totally sure. At various points:
OpenAI was the most prominent group talking publicly about AI risk
Sam Altman was the most prominent person talking publicly about large-scale AI regulation
A bunch of safety-minded people at OpenAI were doing OpenAI’s best capabilities work (GPT-2, GPT-3)
A bunch of safety-minded people worked on stuff that led to ChatGPT (RLHF, John Schulman’s team in general)
Elon Musk tried to take over, and the people who opposed that were (I’m guessing) a coalition of safety people and the rest of OpenAI
It’s really hard to step out of our own perspective here, but when I put myself in the shoes of, say, someone who doesn’t believe in AGI at all, these all seem pretty indicative of a situation where OpenAI and AI safety people were, to a significant extent, building a shared power base, and just couldn’t keep that power base together.