Recently, OpenAI employees signed an open letter demanding that the board reinstate Sam Altman, add other board members (giving some names of people allied with Altman), and resign, or else they would quit and follow Altman to Microsoft.
Following those demands would’ve put the entire organization under the control of one person, with no accountability to anyone. That doesn’t seem like what OpenAI employees would actually want, unless they’re dumber than I thought. So why did they sign? Here are some possible reasons that come to mind:
1. Altman is just really likeable for people like them—they just like him.
2. They felt a sense of injustice and outrage over the CEO being fired that they’d never felt over lower-level employees being fired.
3. They were hired or otherwise rewarded by Altman and thus loyal to him personally.
4. They believed Altman was more ideologically aligned with them than any likely replacement CEO (including Emmett Shear) would be.
5. They felt their profit shares would be worth more with Altman leading the company.
6. They were socially pressured by people with strong views from (3) or (4) or (5).
7. They were afraid the company would implode and they’d lose their job, wanted the option of getting hired at a new group in Microsoft, and the risk of signing seemed low once enough other people had already signed.
8. They were afraid Altman would return as CEO and fire or otherwise punish them if they hadn’t signed.
9. Something else?
Which of those reasons do you think drove people signing that letter, and why do you think so?
I have no data on the OpenAI situation, but #8 has crossed my mind. (It reminded me of the communist elections where the Party got 99% approval.) If Sam Altman returns—and if he is the kind of person some people describe him as—you do not want to be one of the few who didn’t sign the public letter calling for his return. That would be like putting your name on a public shortlist of people who don’t like the boss.
Of course, #5 is also likely. But notice that the entire point of having the board was to prevent #5-style reasoning from ruling the company. Which means that ~all OpenAI employees oppose the OpenAI Charter. Which means that Sam Altman won the revolution (by strategically employing/keeping the kind of people who oppose the company Charter) long before the board even noticed it had started.
(I find it amusing that the document that people in communist Czechoslovakia were afraid not to sign publicly, so they wouldn’t lose their jobs, was called… the Anticharter.)
It was striking to see how many commenters and OA employees quoted Toner quoting the OA Charter (which Sam Altman helped write & signed off on) as proof that she was an unhinged, mindless zealot, and as proof that every negative accusation against the board was true.
It would be like a supermajority of Americans having never heard of the First Amendment and, on hearing a presidential candidate say “the government should not abridge freedom of speech or the press”, all starting to rail about how ‘this is some libertarian moonbat trying to entryist the US government to impose their unprecedentedly extreme ideology about personal freedom, and obviously, totally unacceptable and unelectable. Not abridge speech?! When people abuse their freedom to say so many terrible things, sometimes even criticizing the government? You gotta be kidding—freedom of speech doesn’t mean freedom from consequences, like being punished by laws!’
Hard not to see the OA LLC as too fundamentally unaligned with the mission at that point. It seems like at some point, possibly years ago, OA LLC became basically a place that didn’t believe in the mission or that AGI risk is a thing and regarded all that stuff as so much PR kayfabe and not, like, serious (except for a few nuts over in the Superalignment group who thankfully can be ignored—after all, it’s not like the redteaming ever turns up any real problems, right? you’d’ve heard). At that point, the OA double-structure has failed. Double-structures like Hershey or Mozilla never pit the nonprofit against the for-profit to this extent, and double-structures like Ikea where it’s a tax gimmick, cannot. And it turns out, pitted that much, the for-profit holds most of the cards.
I don’t know how much to fault the board for this. They may well have known how much the employee base had diverged from the mission, but what were they going to do? Fire Altman back in 2020, before he could bring in all the people from Dropbox etc who then hired more like them & backed him, never mind the damage to the LLC? (I’m not sure they ever had the votes to do that for any reason, much less a slippery slope reason.) Leak to the press—the press that Altman has spent 15 years leaking to and building up favors with—to try to embarrass him out? (‘Lol. lmao. lel.’) Politely notify him that it was open war and he had 3 months to defeat them before being fired? Yeah...
Thus far, I don’t think there’s much of a post-mortem to this other than ‘like Arm China, at some point an entity is so misaligned that you can’t stop it from collectively walking out the door and simply ignoring you, no matter how many de jure rights or powers you supposedly have or how blatant the entity’s misalignment has become. And the only way to fix that is to not get into that situation to begin with’. But if you didn’t do that, then OA at this point would probably have accomplished a lot less in terms of both safety & capability, so the choice looked obvious ex ante.
The rules may be nice, but they are not going to enforce themselves.
Many communist countries had freedom of speech and freedom of religion in their constitutions. But those constitutions were never meant to be taken seriously, they were just PR documents for the naive Western journalists to quote from.
Citing a relevant part of the Lex Fridman interview (transcript) which people will probably find helpful to watch, so you can at least eyeball Altman’s facial expressions:
I think it’s also important to do three-body-problem thinking with this situation; it’s also possible that Microsoft or some other third party might have gradually but successfully orchestrated distrust/conflict between two good-guy factions or acquired access to the minds/culture of OpenAI employees, in which case it’s critical for the surviving good guys to mitigate the damage and maximize robustness against third parties in the future.
For example, Altman was misled to believe that the board was probably compromised and he had to throw everything at them, and the board was misled to believe that Altman was hopelessly compromised and they had to throw everything at him (or maybe one of them actually was compromised). I actually wrote about that 5 days before the OpenAI conflict started (I’d call that a fun fact but not a suspicious coincidence, because things are going faster now: 5 days in 2023 is like 30 days in 2019 time).
Disclaimer: I do not work at OpenAI and have no inside knowledge of the situation.
I work in the finance industry. (Personal views are not those of my employer, etc, etc).
Some years ago, a few people from my team (2 on a team of ~7) were laid off as part of firm staff reductions.
My boss and my boss’s boss held a meeting with the rest of the team on the day those people left, explaining what had happened, reassuring us that no further layoffs were planned, describing who would be taking over what parts of the responsibilities of the laid-off people, etc.
On my understanding of employment, this was just...sort of...the basic standard of professionalism and courtesy?
If I had found out about layoffs at my firm through media coverage, or when I tried to email a coworker and their email no longer worked, I would be unhappy. If the only communication I got from above about reasons for the layoffs was that destroying the company ‘would be consistent with the mission’, I would be very unhappy. In any of those cases, I would strongly consider looking for jobs elsewhere.
It has sometimes seemed to me that the EA/nonprofit space does not follow the rules I am familiar with for the employer/employee relationship. Perhaps my experience in the famously kindly and generous finance industry has not prepared me for the cutthroat reality of nonprofit altruist organizations.
Nevertheless, any OpenAI employee with views similar to my own would be concerned and plausibly looking for a new job after the board fired the CEO with no justification or communication. If you want a one-sentence summary of the thought process, it could be:
‘If this is how they treat the CEO, how will they treat me?’
You just explained why it’s totally disanalogous. An ordinary employee is not a CEO {{citation needed}}.
This is true, but in general the differences between an ordinary employee and a CEO go in the CEO’s favor. I believe this does also extend to ‘how are they fired’: on my understanding the modal way a CEO is ‘fired’ is by announcing that they have chosen to retire to pursue other opportunities/spend more time with their family, and receiving a gigantic severance package.
I laughed out loud on this line...
...and then I wondered if you’ve seen Margin Call? It is truly a work of art.
My experiences are mostly in startups, but rarely on the actual founding team, so I have seen more stuff that was unbuffered by kind, diligent, “clueless” bosses.
My general impression is that “systems and processes” go a long way toward creating smooth rides for the people at the bottom, but those things are not effectively in place (1) at the very beginning and (2) at the top when exceptional situations arise. Credentialed labor is generally better compensated in big organizations precisely because they have “systems” where people turn cranks reliably that reliably Make Number Go Up, and then share out fractional amounts of “the number”.
Did you ever see or talk with them again? Did they get nice severance packages? Severance packages are the normal way for oligarchs to minimize expensive conflict, I think.
More or less all of it, I think.
Fundamentally, Sam Altman is a competent interpersonal operator. He’d doubtlessly worked both to be a naturally likeable and loyalty-inspiring person (in a passive way), and to purposefully inspire and select employees for loyalty (actively). That provided the backbone to the effort. No matter how many carrots and sticks were deployed, if Altman didn’t earn the loyalty to some extent, this show of support wouldn’t have been possible to achieve.
By contrast, the board apparently wasn’t very involved with the employees, and they did handle the communications terribly. Why would an OpenAI employee automatically assume they’re the good guys? (When even we are unsure.)
While loyalty to Altman might’ve varied, the employees for sure didn’t have any personal loyalty to the board.
And I’m sure there were carrots and sticks deployed aplenty:
On the carrots end: there could’ve been a ton of things, like Microsoft promising raises and guarantees if they jump ship (to make “we’d just quit otherwise” credible), Altman promising raises if he returns, arguments that they’d earn more in the long run if they either jump ship or get Altman back but not if they do nothing, etc.
Similarly, a ton of sticks: vivid images of the company imploding without Altman, and losing the investors, and of the board doing more random firings and wrecking things, plus covert suggestions of demotions or purges if people don’t support him and he comes back anyway, et cetera.
Which specific carrots and sticks were employed matters little, and likely differed from person to person to some extent. The point is just that there were a lot of things that could’ve sounded convincing with the right spin, it was a relatively high-time-pressure situation, and the people spearheading the effort were (likely, apparently) good at making use of all of this.
The snowball effect/peer pressure obviously played a role. OpenAI employees obviously differ in how loyal/susceptible to pressure they are, but as more and more people signed, the pressure would’ve mounted. First the 100 most loyal signed, then the 100 less-loyal ones (who wouldn’t have signed if the initiative hadn’t already gotten some traction), then the 100 even-less-loyal ones, and so on.
If a random employee X were the first whom they asked to sign, that employee might or might not have refused. But if X is the seven-hundredth employee they’re asking, with 699 preceding ones having already signed, is X really going to make a stand? Yes, maybe! But that requires them to be against it on principle, such that they’re motivated to swim against the current.
And this effect could’ve been invoked even before the majority of the company signed – just by creating a narrative of inevitability: that of course we’re all gonna sign, that this is the way the wind is blowing.
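The cascade described above resembles a Granovetter-style threshold model: each person signs once enough others already have. A minimal sketch, with entirely invented numbers (the headcount of 770 and the loyalty distribution are assumptions for illustration, not facts about OpenAI):

```python
# Threshold-cascade sketch of the "first the 100 most loyal, then the
# 100 less-loyal..." dynamic. All numbers are made up for illustration.

def cascade(thresholds):
    """Each employee signs once the number of existing signers reaches
    their personal threshold. Returns the final signer count."""
    thresholds = sorted(thresholds)
    signed = 0
    changed = True
    while changed:
        changed = False
        # everyone whose threshold is met by the current count signs
        new_signed = sum(1 for t in thresholds if t <= signed)
        if new_signed > signed:
            signed = new_signed
            changed = True
    return signed

# 770 employees: 100 sign unconditionally (threshold 0); the rest need
# progressively more social proof before they'll add their names.
thresholds = [0] * 100 + list(range(1, 671))
print(cascade(thresholds))  # → 770: the loyal core triggers a full cascade

# Without that unconditional core, nobody moves first and nobody signs.
print(cascade(list(range(1, 771))))  # → 0
```

The design point is that the outcome hinges almost entirely on whether a seed of unconditional signers exists, which matches the comment’s claim that the most loyal 100 made everyone else’s signature nearly automatic.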
Lastly, I think signing the letter doesn’t actually commit an employee to anything. It’s not really legally binding? The cost of signing it is thus ~zero (except maybe some vague concerns about honor), whereas the supposed rewards for signing it and the punishments for not signing it (which, again, were doubtlessly floated around) are much more concrete.
And that point I’m outlining is, in itself, unlikely to be something the people organizing the effort had failed to anticipate.
So there you have it: a relatively good boss is ousted by the board you know nothing about for unclear reasons, people close to the epicenter are running around telling you how it’s all going to implode now and how we have this costless way to maybe avert it, they’re being really pushy about it, it’s all very confusing and scary, more and more of the people around you are signing the letter, there’s an increasing atmosphere that signing it is just what an OpenAI employee does – would you really not sign?
Which isn’t to say it wasn’t an impressive accomplishment. The level of coordination required to pull this off was doubtlessly high, it would’ve required handling all of the aforementioned covert messaging about carrots-and-sticks with a minimal degree of competence, it required the foundation of Sam Altman establishing himself as a good leader, etc.
But I’m wholly unsurprised it worked.
It seemed like a classic case of the prisoner’s dilemma, so (5) and (7). The more of your company that signs the petition, the lower the value of your PPUs, making it more attractive to sign. It reached a point where they felt OpenAI’s value and their PPUs would go to nothing if a critical mass joined Microsoft. In fact, if MS was willing to match compensation, everyone “cooperating” by not signing the petition would be a worse outcome for everyone than just joining MS, because they had already seen other players move first (Altman, Brockman, other resignations) - that is, if we look purely at compensation (not even taking into account the possibility that the PPU-equivalent at MS would not be profit-capped). In a textbook prisoner’s dilemma, cooperation leads to the best overall outcome for everyone, yet the best move is to defect if you are unable to coordinate - which is not really the case here.
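The payoff logic in that comment can be sketched as a toy tipping-point model. Every number here is invented: the dollar figures, the linear decay of PPU value with departures, and a Microsoft offer that simply matches compensation are all assumptions for illustration, not claims about actual terms:

```python
# Toy payoff model for the signing decision (all figures invented).
# PPU value is assumed to fall linearly as more colleagues commit to
# leaving; the Microsoft fallback is assumed fixed.

def ppu_value(frac_signed):
    """Assumed: PPUs lose value linearly as the company empties out."""
    return 1_000_000 * (1 - frac_signed)

def payoff(sign, frac_signed):
    msft_offer = 800_000  # assumed matched-compensation fallback
    if sign:
        return msft_offer          # you keep the credible exit option
    return ppu_value(frac_signed)  # you're betting on OpenAI surviving

# Early on, holding out looks better; past the tipping point it doesn't.
for frac in (0.1, 0.5, 0.9):
    print(frac, payoff(False, frac) > payoff(True, frac))
```

Under these assumed numbers the crossover sits at a 20% signing rate, after which signing strictly dominates, which is the “more signatures make signing more attractive” feedback loop the comment describes.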
Further, even if an OAI employee did not care about PPUs at all, and all they cared about was the non-profit mission of AI for the betterment of all humanity, they might have felt there was a greater likelihood of achieving that mission at Microsoft than at the empty shell of OAI (the safety teams, for example—might as well do your best to help safety at the new “leading” organisation, and get paid too).
Not sure if this page is broken or I’m technically inept, but I can’t figure out how to reply to qualiia’s comment directly:
Primarily #5 and #7 was my gut reaction, but qualiia’s post articulates the rationale better than I could.
One useful piece of information that would influence my weights: what were OAI’s general hiring criteria? If they sought solely the “best and brightest” on technical skills and enticed talent primarily with premiere pay packages, I’d lean #5 harder. If they sought cultural/mission fits in some meaningful way, I might update lower on #5/7 and higher on others. I read the external blog post about the bulk of OAI compensation being in PPUs, but that’s not necessarily incompatible with mission fit.
Well done on the list overall, seems pretty complete, though aphyer provides a good unique reason (albeit adjacent to #2).
Suppose you’re an engineer at SpaceX. You’ve always loved rockets, and Elon Musk seems like the guy who’s getting them built. You go to work on Saturdays, you sometimes spend ten hours at the office, you watch the rockets take off and you watch the rockets land intact and that makes everything worth it.
Now imagine that Musk gets in trouble with the government. Let’s say the Securities and Exchange Commission charges him with fraud again, and this time they’re *really* going after him, not just letting him go with a slap on the wrist like the first time. SpaceX’s board of directors negotiates with SEC prosecutors. When they emerge they fire Musk from SpaceX, and remove Elon and Kimbal Musk from the board. They appoint Gwynne Shotwell as the new CEO.
You’re pretty worried! You like Shotwell, sure, but Musk’s charisma and his intangible magic have been very important to the company’s success so far. You’re not sure what will happen to the company without him. Will you still be making revolutionary new rockets in five years, or will the company regress to the mean like Boeing? You talk to some colleagues, and they’re afraid and angry. No one knows what’s happening. Alice says that the company would be nothing without Musk and rails at the board for betraying him. Bob says the government has been going after Musk on trumped-up charges for a while, and now they finally got him. Rumor has it that Musk is planning to start a new rocket company.
Then Shotwell resigns in protest. She signs an open letter calling for Musk’s reinstatement and the resignation of the board. Board member Luke Nosek signs it too, and says his earlier vote to fire Musk was a huge mistake.
You get a Slack message from Alice saying that she’s signed the letter because she has faith in Musk and wants to work at his company, whichever company that is, in order to make humanity a multiplanetary species. She asks if you want to sign.
How do you feel?
Replying to David Hornbein.
Thank you for this comment, this was basically my view as well. I think the employees of OpenAI are simply excited about AGI, have committed their lives to making it a reality, working long hours, and believe AGI would be good for humanity and also good for them personally. My view is that they are very emotionally invested in building AGI, and stopping all that progress for reasons that feel speculative, theoretical, and not very tangible feels painful.
Not that I would agree with that, assuming this is correct.
>Now imagine that Musk gets in trouble with the government
Now imagine the same scenario, but Elon has not gotten in trouble with the government, and multiple people (including those who fired him) have affirmed he did nothing wrong.
I have no inside information. My guess is #5 with a side of 1, 6, and “the letter wasn’t legally binding anyway so who cares.”
I think that the lesson here is that if your company says “Work here for the principles in this charter. We also pay a shitload of money” then you are going to get a lot of employees who like getting paid a shitload of money regardless of the charter, because those are much more common in the population than people who believe the principles in the charter and don’t care about money.
1. Someone in your company gets fired by a boss you don’t know/particularly like, without any reason given.
2. You are mad at the boss and want the decision overturned.
3. You have a credible, attractive BATNA (the Microsoft offer).
These 3 items seem like they would be sufficient to cause something like the Open Letter to happen.
In most cases number 3 is not present, which I think is why we don’t see things like this happen more often in other organisations.
None of this requires Sam to be hugely likeable or a particularly savvy political operator, just that people generally like him. People seem to suggest he was one or both, which just makes the letter more likely.
I’m sure this doesn’t explain it all in OpenAI’s case—some/many employees would also have been worried about AI safety which complicates the decision—but I suspect it is the underlying story.
I think #5+#6. The people with the most stock tend to be the bosses of the others — the “social pressure” of your boss telling you to sign right now is quite persuasive.