Microsoft is the sort of corporate bureaucracy where dynamic orgs/founders/researchers go to die. My median expectation is that whatever former OpenAI group ends up there will be far less productive than they were at OpenAI.
I’m a bit sceptical of that. You gave some reasonable arguments, but all of this should be known to Sam Altman, and he still chose to accept Microsoft’s offer instead of founding his own org (I’m assuming he would easily be able to raise a lot of money). So, given that “how productive are the former OpenAI folks at Microsoft?” is the crux of the argument, it seems that recent events are good news iff Sam Altman made a big mistake with that decision.
recent events are good news iff Sam Altman made a big mistake with that decision
Or if Sam Altman isn’t actually primarily motivated by the desire to build an AGI, as opposed to standard power-/profit-maximization motives. Accelerationists are now touting him as their messiah, and he’s obviously always been happy to generate hype about OpenAI’s business vision. But that doesn’t necessarily translate into him actually believing, at the gut level, that the best way to maximize prosperity/power is to build an AGI.
He may realize that an exodus into Microsoft would cripple OpenAI talent’s ability to be productive, and do it anyway, because it offers him personally better political opportunities for growth.
It doesn’t even have to be a dichotomy of “total AGI believer” vs “total simulacrum-level-4 power-maximizer”. As long as myopic political motives have a significant-enough stake in his thinking, they may lead him astray.
“Doomers vs. Accelerationists” is one frame on this conflict, but it may not be the dominant one.
“Short-sighted self-advancement vs. Long-term vision” is another, and a more fundamental one. Moloch favours capabilities over alignment, so it usually hands the victory to the accelerationists. But that only holds insofar as the accelerationists’ motives coincide with short-sighted power-maximization. The moment there’s an even shorter-sighted path available, an even lower energy state to fall into, Moloch will cast capability-pursuit aside.
The current events may (or may not!) be an instance of that.
He was involved in rationalist circles for a long time, iirc. He said social status would still matter in a post-AGI world, so I suspect his true goal is either to be known forever as the person who brought about AGI (status), or something immortality-related.