Recent events are good news iff Sam Altman made a big mistake with that decision.
Or if Sam Altman isn’t actually primarily motivated by the desire to build an AGI, as opposed to standard power-/profit-maximization motives. Accelerationists are now touting him as their messiah, and he’d obviously always been happy to generate hype about OpenAI’s business vision. But that hype doesn’t necessarily translate into him believing, at the gut level, that the best way to maximize prosperity/power is to build an AGI.
He may realize that an exodus into Microsoft would cripple OpenAI talent’s ability to be productive, and do it anyway, because it offers him, personally, better opportunities for political growth.
It doesn’t even have to be a dichotomy of “total AGI believer” vs. “total simulacrum-level-4 power-maximizer”. As long as myopic political motives have a significant-enough stake in his thinking, they may lead him astray.
“Doomers vs. Accelerationists” is one frame on this conflict, but it may not be the dominant one.
“Short-sighted self-advancement vs. Long-term vision” is another, and a more fundamental one. Moloch favours capabilities over alignment, so it usually hands the victory to the accelerationists. But that holds only insofar as accelerationists’ motives coincide with short-sighted power-maximization. The moment there’s an even shorter-sighted way for things to go, an even lower energy state to fall into, Moloch will cast capability-pursuit aside.
The current events may (or may not!) be an instance of that.
He was involved in rationalist circles for a long time, iirc. He’s said social status would still matter in a post-AGI world, so I suspect his true goal is either being known forever as the person who brought about AGI (status), or something immortality-related.