I agree; it’s critical to have a very close reading of “The board did *not* remove Sam over any specific disagreement on safety”.
This is the kind of situation where every qualifier in a statement needs to be understood as essential—if the statement were true without the word “specific”, then I can’t imagine why that word would have been inserted.
To elaborate on that, Shear is presumably saying exactly as much as he is allowed to say in public. This implies that if the removal had nothing to do with safety, then he would say “The board did not remove Sam over anything to do with safety”. His insertion of that qualifier implies that he couldn’t make a statement that broad, and therefore that safety considerations were involved in the removal.
According to Bloomberg, “Even CEO Shear has been left in the dark, according to people familiar with the matter. He has told people close to OpenAI that he doesn’t plan to stick around if the board can’t clearly communicate to him in writing its reasoning for Altman’s sudden firing.”
Evidence that Shear simply wasn’t told the exact reason, though the “in writing” part is suspicious. Maybe he was told, just not in writing, and wants them to write it down so they’re on the record.
He was probably kinda sleep-deprived and rushed, which could explain inessential words slipping in.
I would normally agree with this, except it does not seem to me like the board has been particularly deliberate about their communication so far. If they are conscientious enough to craft their statements down to the word, why did they handle the whole affair the way they seem to have?
I feel like a group of people who did not see fit to provide context or justifications to either their employees or largest shareholder when changing company leadership and board composition probably also wouldn’t weigh each word carefully when explaining the situation to a total outsider.
We still benefit from a very close reading, mind you; based on the other information we have, I just believe there’s a lot more wiggle room here than we would normally expect from corporate boards operating with legal advice.
The quote is from Emmett Shear, not a board member.
The board is also following the “don’t say anything literally false” policy by saying practically nothing publicly.
Just as I infer from Shear’s qualifier that the firing did have something to do with safety, I infer from the board’s public silence that their reason for the firing isn’t one that would win back the departing OpenAI members (or would only do so at a cost that’s not worth paying).
This is consistent with it being a safety concern shared by the superalignment team (who by and large didn’t sign the statement at first) but not by the rest of OpenAI (who view pushing capabilities forward as a good thing, because like Sam they believe the EV of OpenAI building AGI is better than the EV of unilaterally stopping). That’s my current main hypothesis.
Ah, oops! My expectations are reversed for Shear; him I strongly expect to be as exact as humanly possible.
With that update, I’m inclined to agree with your hypothesis.
(or would only do so at a cost that’s not worth paying)
That’s the part that confuses me most. An NDA wouldn’t be strong enough reason at this point. As you say, safety concerns might, but that seems pretty wild unless they literally already have AGI and are fighting over what to do with it. The other thing is anything that, if said out loud, might involve the police, so revealing the info would itself be an escalation (and possibly mutually assured destruction, if there’s criminal liability on both sides). I got nothing.