As for OpenAI dropping the mask: I devoted essentially zero effort to predicting this, though my complete lack of surprise implies it is consistent with the information I already had. Even so:
Open Philanthropy (OP) only had that board seat and made a donation because Altman invited them to, and he could personally have covered the $30m or whatever OP donated for the seat [...] He thought up, drafted, and oversaw the entire for-profit thing in the first place, including all provisions related to board control. He voted for all the board members, filling it back up from when it was just him (& Greg Brockman at one point IIRC). He then oversaw and drafted all of the contracts with MS and others, while running the for-profit and eschewing equity in the for-profit. He designed the board to be able to fire the CEO because, to quote him, “the board should be able to fire me”. [...]
Credit where credit is due—Altman may not have believed the scaling hypothesis like Dario Amodei, may not have invented PPO like John Schulman, may not have worked on DL from the start like Ilya Sutskever, may not have created GPT like Alec Radford, and may not have written & optimized any code like Greg Brockman—but the 2023 OA organization is fundamentally his work.
The question isn’t, “how could EAers have ever let Altman take over OA and possibly kick them out”, but entirely the opposite: “how did EAers ever get any control of OA, such that they could even possibly kick out Altman?” Why was this even a thing, given that OA was, to such an extent, an Altman creation?
The answer is: “because he gave it to them.” Altman freely and voluntarily handed it over to them.
So you have an answer right there to why the Board was willing to assume Altman’s good faith for so long, despite everyone clamoring to explain how (in hindsight) it was so obvious that the Board should always have been at war with Altman, regarding him as an evil schemer out to get them. But that’s an insane way for them to think! Why would he undermine the Board or try to take it over, when he was the Board at one point, and when he made and designed it in the first place? Why would he be money-hungry when he refused all the equity that he could so easily have taken—equity that, in fact, various partner organizations wanted him to have in order to ensure he had ‘skin in the game’? Why would he go out of his way to make the double non-profit with such onerous & unprecedented terms for any investors, which caused a lot of difficulties in getting investment and which Microsoft had to think seriously about, if he just didn’t genuinely care or believe any of that? Why any of this?
(None of that was a requirement, or even that useful to the OA for-profit. [...] Certainly, if all of this was for PR reasons or some insidious decade-long scheme of Altman’s to ‘greenwash’ OA, it was a spectacular failure—nothing has occasioned more confusion and bad PR for OA than the double structure or capped-profit. [...])
What happened is, broadly: ‘Altman made the OA non/for-profits and gifted most of it to EA with the best of intentions, but then it went so well & was going to make so much money that he had giver’s remorse, changed his mind, and tried to quietly take it back; but he had to do it by hook or by crook, because the legal terms said clearly “no takesie backsies”’. Altman was all for EA and AI safety and an all-powerful nonprofit board being able to fire him, and was sincere about all that, until OA & the scaling hypothesis succeeded beyond his wildest dreams, and he discovered it was inconvenient for him and convinced himself that the noble mission now required him to be in absolute control, never mind the restraints he had set up on himself years ago—he now understands how well-intentioned but misguided he was and how he should have trusted himself more. (Insert Garfield meme here.)
No wonder the board found it hard to believe! No wonder it took so long to realize Altman had flipped on them, and it seemed Sutskever needed Slack screenshots showing Altman blatantly lying to them about Toner before he finally, reluctantly, flipped. The Altman you need to distrust & assume bad faith of & need to be paranoid about stealing your power is also usually an Altman who never gave you any power in the first place! I’m still kinda baffled by it, personally.
He concealed this change of heart from everyone, including the board, gradually began trying to unwind it, overplayed his hand at one point—and here we are.
It is still a mystery to me what exactly Sam’s motive is.
Shit.
@gwern wrote an explanation of why this is surprising (for some) [here](https://forum.effectivealtruism.org/posts/Mo7qnNZA7j4xgyJXq/sam-altman-open-ai-discussion-thread?commentId=CAfNAjLo6Fy3eDwH3)