The most likely explanation I can think of, for what look like about-faces by Ilya and Jan this morning, is realizing that the worst plausible outcome is exactly what we’re seeing: Sam running a new OpenAI at Microsoft, free of that pesky charter. Any amount of backpedaling, and even resigning in favor of a less safety-conscious board, is preferable to that.
Yeah, but if this is the case, I’d have liked to see a bit more balance than just retweeting the tribal-affiliation slogan (“OpenAI is nothing without its people”) and saying that the board should resign (or, in Ilya’s case, implying that he regrets and denounces everything he initially stood for together with the board). I think it’s a defensible take that the board should resign after how things went down, but the board was probably pointing at some real concerns, and those concerns won’t get addressed at all if the pendulum now swings too far in the opposite direction. So I would have at least hoped for something like “the board should resign, but here are some things I think they had a point about, which I’d like to see not get swept under the carpet after the counter-revolution.”
It’s too late for a conditional surrender now that Microsoft is a credible threat to get 100% of OpenAI’s capabilities team; Ilya and Jan are communicating unconditional surrender because the alternative is even worse.
I’m not sure this is an unconditional surrender. They’re not talking about changing the charter, just appointing a new board. If the new board isn’t much less safety conscious, then a good bit of the organization’s original purpose and safeguards are preserved. So the terms of surrender would be negotiated in picking the new board.
AFAICT the only formal power the board has is in firing the CEO, so if we get a situation where whenever the board wants to fire Sam, Sam comes back and fires the board instead, well, it’s not exactly an inspiring story for OpenAI’s governance structure.
This is a very good point. It is strange, though, that the board was able to fire Sam without the Chair agreeing to it. Something as big as firing the CEO seems like it should have required at least a conversation with the Chair, if not the Chair’s affirmative vote. The way this was handled was a big mistake, and there need to be new rules in place to prevent big mistakes like this.
If actually enforcing the charter leads to them being immediately disempowered, it’s not worth anything in the first place. We were already in the “worst case scenario”. Better to be honest about it. Then at least, the rest of the organisation doesn’t get to keep pointing to the charter and the board as approving their actions when they don’t.
I’m pretty sure the charter that it is the board’s duty to enforce doesn’t say anything about the rest of the document not counting if investors and employees make dire enough threats.
If actually enforcing the charter leads to them being immediately disempowered, it’s not worth anything in the first place.
If you pushed for fire sprinklers to be installed, then yell “FIRE”, and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don’t think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.
Keep in mind that the announcement was not something like:
After careful consideration and strategic review, the Board of Directors has decided to initiate a leadership transition. Sam Altman will be stepping down from his role, effective November 17, 2023. This decision is a result of mutual agreement and understanding that the company’s long-term strategy and core values require a different kind of leadership moving forward.
Instead, the board announced:
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
That is corporate speak for “Sam Altman was a lying liar about something big enough to put the entire project at risk, and as such we need to cut ties with him immediately and also warn everyone who might work with him that he was a lying liar.” If you make accusations like that, and don’t back them up, I don’t think you get to be outraged that people start doubting your judgement.
If you pushed for fire sprinklers to be installed, then yell “FIRE”, and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don’t think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.
The situation is actually even less surprising than this, because the thing people actually initially contemplated doing in response to the board’s actions was not even ‘taking away your ability to trigger the fire sprinklers’ but ‘going off and living in a new building somewhere else that you can’t flood for lulz’.
As I understand the situation, OpenAI’s board had, and retained, the legal right to stay in charge of OpenAI even as all its employees left to go to Microsoft. If they decide they would rather negotiate, from their starting point of ‘being in charge of an empty building’, toward ‘making concessions’, this doesn’t mean that the charter didn’t mean anything! It means that the charter gave them a bunch of power, which they wasted.
They came at the king and missed.
If they thought this would be the outcome of firing Sam, they would not have done so.
The risk they took was calculated, but man, are they bad at politics.
I keep being confused by them not revealing their reasons. Whatever they are, there’s no way that saying them out loud wouldn’t give some ammo to those defending them, unless somehow between Friday and now they swung from “omg this is so serious we need to fire Altman NOW” to “oops looks like it was a nothingburger, we’ll look stupid if we say it out loud”. Do they think it’s a literal infohazard or something? Is it such a serious accusation that it would involve the police to state it out loud?
At this point I’m beginning to wonder if a gag order is involved.
Interesting! Bad at politics is a good way to put it. So you think this was purely a political power move to remove Sam, and they were so bad at projecting the outcomes that all of them thought Greg would stay on as President and employees would largely accept the change?
No, I don’t think the board’s motives were power politics; I’m saying that they failed to account for the kind of political power moves that Sam would make in response.
It’s hard to know for sure, but I think this is a reasonable and potentially helpful perspective. Some of the perceived repercussions on the state of AI safety might be “the band-aid being ripped off”.
The important question is, why now? And why with so little evidence offered to back up such an extreme action?