I have no data on the OpenAI situation, but #8 has crossed my mind. (It reminded me of the communist elections where the Party got 99% approval.) If Sam Altman returns—and if he is the kind of person some people describe him as—you do not want to be one of the few who didn’t sign the public letter calling for his return. That would be like putting your name on a public short list of people who don’t like the boss.
Of course, #5 is also likely. But notice that the entire point of having the board was to prevent #5 reasoning from ruling the company. Which means that ~all OpenAI employees oppose the OpenAI Charter. Which means that Sam Altman won the revolution (by strategically employing/keeping the kind of people who oppose the company Charter) long before the board even noticed that it had started.
(I find it amusing that the document that people in communist Czechoslovakia were afraid not to sign publicly, so that they would not lose their jobs, was called… Anticharter.)
Which means that ~all OpenAI employees oppose the OpenAI Charter.
It was striking to see how many commenters and OA employees were quoting Toner quoting the OA Charter (which Sam Altman helped write & signed off on) as proof that she was an unhinged, mindless zealot and that every negative accusation against the board was true.
It would be like a supermajority of Americans having never heard of the First Amendment and, on hearing a presidential candidate say “the government should not abridge freedom of speech or the press”, all starting to rail about how ‘this is some libertarian moonbat trying to entryist the US government to impose their unprecedentedly extreme ideology about personal freedom, and obviously, totally unacceptable and unelectable. Not abridge speech?! When people abuse their freedom to say so many terrible things, sometimes even criticizing the government? You gotta be kidding—freedom of speech doesn’t mean freedom from consequences, like being punished by laws!’
Hard not to see the OA LLC as too fundamentally unaligned with the mission at that point. It seems like at some point, possibly years ago, OA LLC became basically a place that didn’t believe in the mission or that AGI risk is a thing, and regarded all that stuff as so much PR kayfabe and not, like, serious (except for a few nuts over in the Superalignment group who thankfully can be ignored—after all, it’s not like the redteaming ever turns up any real problems, right? you’d’ve heard). At that point, the OA double-structure had failed. Double-structures like Hershey or Mozilla never pit the nonprofit against the for-profit to this extent, and double-structures like Ikea, where the arrangement is a tax gimmick, cannot. And it turns out that, pitted that much, the for-profit holds most of the cards.
I don’t know how much to fault the board for this. They may well have known how much the employee base had diverged from the mission, but what were they going to do? Fire Altman back in 2020, before he could bring in all the people from Dropbox etc. who then hired more like them & backed him, never mind the damage to the LLC? (I’m not sure they ever had the votes to do that for any reason, much less a slippery slope reason.) Leak to the press—the press that Altman has spent 15 years leaking to and building up favors with—to try to embarrass him out? (‘Lol. lmao. lel.’) Politely notify him that it was open war and he had 3 months to defeat them before being fired? Yeah...
Thus far, I don’t think there’s much of a post-mortem to this other than ‘like Arm China, at some point an entity is so misaligned that you can’t stop it from collectively walking out the door and simply ignoring you, no matter how many de jure rights or powers you supposedly have or how blatant the entity’s misalignment has become. And the only way to fix that is to not get into that situation to begin with’. But if you didn’t do that, then OA at this point would probably have accomplished a lot less in terms of both safety & capability, so the choice looked obvious ex ante.
The rules may be nice, but they are not going to enforce themselves.
Many communist countries had freedom of speech and freedom of religion in their constitutions. But those constitutions were never meant to be taken seriously; they were just PR documents for naive Western journalists to quote from.
Citing a relevant part of the Lex Fridman interview (transcript), which people will probably find helpful to watch so they can at least eyeball Altman’s facial expressions:
LEX FRIDMAN: How do you hire? How do you hire great teams? The folks I’ve interacted with, some of the most amazing folks I’ve ever met.
SAM ALTMAN: It takes a lot of time. I mean, I think a lot of people claim to spend a third of their time hiring. I for real truly do. I still approve every single hire at OpenAI. And I think we’re working on a problem that is like very cool and that great people want to work on. We have great people and people want to be around them. But even with that, I think there’s just no shortcut for putting a ton of effort into this.
I think it’s also important to do three-body-problem thinking with this situation; it’s also possible that Microsoft or some other third party might have gradually but successfully orchestrated distrust/conflict between two good-guy factions or acquired access to the minds/culture of OpenAI employees, in which case it’s critical for the surviving good guys to mitigate the damage and maximize robustness against third parties in the future.
For example, Altman was misled to believe that the board was probably compromised and he had to throw everything at them, and the board was misled to believe that Altman was hopelessly compromised and they had to throw everything at him (or maybe one of them was actually compromised). I actually wrote about that 5 days before the OpenAI conflict started (I’d call that a fun fact but not a suspicious coincidence, because things are going faster now; 5 days in 2023 is like 30 days in 2019 time).