I tend to view the events around OpenAI’s firing of Sam Altman much more ambiguously than others do, and IMO it probably balances out to nothing in the end, so I don’t care as much as some other people here.
To respond more substantially:
From johnswentworth:
Here’s the high-gloss version of my take. The main outcomes are:
The leadership who were relatively most focused on racing to AGI and least focused on safety are moving from OpenAI to Microsoft. Lots of employees who are relatively more interested in racing to AGI than in safety will probably follow.
Microsoft is the sort of corporate bureaucracy where dynamic orgs/founders/researchers go to die. My median expectation is that whatever former OpenAI group ends up there will be far less productive than they were at OpenAI.
It’s an open question whether OpenAI will stick around at all.
Insofar as they do, they’re much less likely to push state-of-the-art in capabilities, and much more likely to focus on safety research.
Insofar as they shut down, the main net result will be a bunch of people who were relatively more interested in racing to AGI and less focused on safety moving to Microsoft, which is great.
I agree with a rough version of the claim that they might be absorbed into Microsoft, thus making capabilities advances less likely, and this is plausibly at least somewhat important.
My main disagreement here is that I don’t think capabilities advances matter as much for AI doom as LWers think, and slowing them down may even be anti-helpful, depending on the circumstances. This probably comes down to very different views on things like how strong priors need to be.
From johnswentworth:
There’s apparently been a lot of EA-hate on twitter as a result. I personally expect this to matter very little, if at all, in the long run, but I’d expect it to be extremely disproportionately salient to rationalists/EAs/alignment folk.
I actually think this partially matters. The tricky part is that, on the one hand, Twitter can be important, but on the other, I agree that people here overrate it a lot.
My main disagreement is that I don’t think OpenAI actually matters that much in the capabilities race, and I think social factors matter more than John Wentworth does. Also, given my optimistic world-model on alignment, corporate drama like this mostly doesn’t matter.
One final thought: I feel like the AGI clauses in OpenAI’s charter were extremely bad, because AGI is very ill-defined, and in a corporate or court setting that is a very shaky basis to build on. They would need objective, verifiable metrics if they want to deal with dangerous AI. More generally, I kind of hate the AGI concept, for lots of reasons.