Latest news: Time sheds considerably more light on the board position, in its discouragingly-named piece “2023 CEO of the Year: Sam Altman” (excerpts; HN). While it sounds & starts like a puff piece (no offense to Ollie—cute coyote photos!), it actually contains a fair bit of leaking I haven’t seen anywhere else. Most strikingly:
claims that the Board thought it had the OA executives on its side, because the executives had approached it about Altman:
The board expected pressure from investors and media. But they misjudged the scale of the blowback from within the company, in part because they had reason to believe the executive team would respond differently, according to two people familiar with the board’s thinking, who say the board’s move to oust Altman was informed by senior OpenAI leaders, who had approached them with a variety of concerns about Altman’s behavior and its effect on the company’s culture.
(The wording here strongly implies it was not Sutskever.) This of course greatly undermines the “incompetent Board” narrative, possibly explains both why the Board thought it could trust Mira Murati & why she didn’t inform Altman ahead of time (was she one of those execs...?), and casts further doubt on the ~100% signature rate of the famous OA employee letter.
Now that it’s safe(r) to say negative things about Altman, because it has become common knowledge that he was fired from Y Combinator and there is an independent investigation planned at OA, it seems that more of these incidents have been coming to light.
confirms my earlier interpretation that at least one of the dishonesties was specifically lying to a board member, claiming that another member wanted Toner fired immediately, in order to manipulate them into supporting her ouster:
One example came in late October, when an academic paper Toner wrote in her capacity at Georgetown was published. Altman saw it as critical of OpenAI’s safety efforts and sought to push Toner off the board. Altman told one board member that another believed Toner ought to be removed immediately, which was not true, according to two people familiar with the discussions.
This episode did not spur the board’s decision to fire Altman, those people say, but it was representative of the ways in which he tried to undermine good governance, and was one of several incidents that convinced the quartet that they could not carry out their duty of supervising OpenAI’s mission if they could not trust Altman. Once the directors reached the decision, they felt it was necessary to act fast, worried Altman would detect that something was amiss and begin marshaling support or trying to undermine their credibility. “As soon as he had an inkling that this might be remotely on the table,” another of the people familiar with the board’s discussions says, “he would bring the full force of his skills and abilities to bear.”
EDIT: WSJ (excerpts) is also reporting this, by way of a Helen Toner interview (which doesn’t say much on the record, but does provide context for why she said that quote everyone used as a club against her: an OA lawyer lied to her about her ‘fiduciary duties’ while threatening to sue & bankrupt, and she got mad and pointed out that even outright destroying OA would be consistent with the mission & charter so she definitely didn’t have any ‘fiduciary duty’ to maximize OA profits).
Unfortunately, the Time article, while seeming to downplay how much the Toner incident mattered by saying it didn’t “spur” the decision, doesn’t explain what did spur it, nor refer to the Sutskever Slack discussion AFAICT. So I continue to maintain that Altman was moving to remove Toner so urgently in order to hijack the board, and that this attempt was one of the major concerns, and his deception around Toner’s removal, and particularly the executives discussing the EA purge, was probably the final proximate cause which was concrete enough & came with enough receipts to remove whatever doubt they had left (whether that was “the straw that broke the camel’s back” or “the smoking gun”).
continues to undermine ‘Q* truthers’ by not even mentioning it (except possibly a passing reference by Altman at the end to “doubling down on certain research areas”)
The article does provide good color and other details I won’t try to excerpt in full (although some are intriguing—where, exactly, was this feedback to Altman about him being dishonest in order to please people?), eg:
...Altman, 38, has been Silicon Valley royalty for a decade, a superstar founder with immaculate vibes...Interviews with more than 20 people in Altman’s circle—including current and former OpenAI employees, multiple senior executives, and others who have worked closely with him over the years—reveal a complicated portrait. Those who know him describe Altman as affable, brilliant, uncommonly driven, and gifted at rallying investors and researchers alike around his vision of creating artificial general intelligence (AGI) for the benefit of society as a whole. But four people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that he had repeatedly received feedback that he was sometimes dishonest in order to make people feel he agreed with them when he did not. [see also Joshua Achiam’s defense of Altman] These people saw this pattern as part of a broader attempt to consolidate power. “In a lot of ways, Sam is a really nice guy; he’s not an evil genius. It would be easier to tell this story if he was a terrible person,” says one of them. “He cares about the mission, he cares about other people, he cares about humanity. But there’s also a clear pattern, if you look at his behavior, of really seeking power in an extreme way.” An OpenAI spokesperson said the company could not comment on the events surrounding Altman’s firing. “We’re unable to disclose specific details until the board’s independent review is complete. We look forward to the findings of the review and continue to stand behind Sam,” the spokesperson said in a statement to TIME. “Our primary focus remains on developing and releasing useful and safe AI, and supporting the new board as they work to make improvements to our governance structure.”
If you’ve noticed OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon, it’s because another set of leaks has dropped, and they are again unflattering to Sam Altman & consistent with the previous ones. Today the Washington Post adds to the pile, “Warning from OpenAI leaders helped trigger Sam Altman’s ouster: The senior employees described Altman as psychologically abusive, creating delays at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO” (archive.is; HN; excerpts), which confirms the Time/WSJ reporting about executives approaching the board with concerns about Altman, and adds more details—their concerns did not relate to the Toner dispute, but apparently were about regular employees:
This fall, a small number of senior leaders approached the board of OpenAI with concerns about chief executive Sam Altman.
Altman—a revered mentor, prodigious start-up investor and avatar of the AI revolution—had been psychologically abusive, the employees said, creating pockets of chaos and delays at the artificial-intelligence start-up, according to two people familiar with the board’s thinking who spoke on the condition of anonymity to discuss sensitive internal matters.
The company leaders, a group that included key figures and people who manage large teams, mentioned Altman’s allegedly pitting employees against each other in unhealthy ways, the people said. [The executives approaching the board were previously reported in Time/WSJ, and the chaos hinted at in The Atlantic, but this appears to add some more detail.]
...these complaints echoed their [the board’s] interactions with Altman over the years, and they had already been debating the board’s ability to hold the CEO accountable.
Several board members thought Altman had lied to them, for example, as part of a campaign to remove board member Helen Toner after she published a paper criticizing OpenAI, the people said.
The new complaints triggered a review of Altman’s conduct during which the board weighed the devotion Altman had cultivated among factions of the company against the risk that OpenAI could lose key leaders who found interacting with him highly toxic.
They also considered reports from several employees who said they feared retaliation from Altman: One told the board that Altman was hostile after the employee shared critical feedback with the CEO and that he undermined the employee on that person’s team, the people said.
The complaints about Altman’s alleged behavior, which have not previously been reported, were a major factor in the board’s abrupt decision to fire Altman on November 17.
Initially cast as a clash over the safe development of artificial intelligence, Altman’s firing was at least partially motivated by the sense that his behavior would make it impossible for the board to oversee the CEO.
(I continue to speculate Superalignment was involved, if only due to their enormous promised compute-quota and small headcount, but the wording here seems like it involved more than just a single team or group, and also points back to some of the earlier reporting and the other open letter, so there may be many more incidents than appreciated.)
An elaboration on the WaPo article in the 2023-12-09 NYT: “Inside OpenAI’s Crisis Over the Future of Artificial Intelligence: Split over the Leadership of Sam Altman, Board Members and Executives Turned on One Another. Their Brawl Exposed the Cracks at the Heart of the AI Movement” (excerpts). Mostly a gossipy narrative from both the Altman & D’Angelo sides, so I’ll just copy over my HN comment:

another reporting of internal OA complaints about Altman’s manipulative/divisive behavior, see previously on HN
previously we knew Altman had been dividing-and-conquering the board by lying that others wanted to fire Toner; this says that, specifically, Altman had lied about McCauley wanting to fire Toner; presumably, this was said to D’Angelo.
Concerns over Tigris had been mooted, but this says specifically that the board thought Altman had not been forthcoming about it; still unclear if he had tried to conceal Tigris entirely or if he had failed to mention something more specific like who he was trying to recruit for capital.
Sutskever had threatened to quit after Jakub Pachocki’s promotion; previous reporting had said he was upset about it, but hadn’t hinted at him being so angry as to threaten to quit OA
Sutskever doesn’t seem to be too rank/promotion-hungry (why would he be? he is, to quote one article, ‘a god of AI’, and is now one of the most-cited researchers ever), and one would think it would take a lot for him to threaten to quit OA… Coverage thus far seems content to take the attitude that he must be some sort of 5-year-old child throwing a temper tantrum over a slight, but I find this explanation inadequate.
I suspect there is much more to this thread, and it may tie back to Superalignment & broken promises about compute-quotas. (Permanently subverting or killing safety research does seem like adequate grounds for Sutskever to deliver an ultimatum, for the same reasons that the Board killing OA can be the best of several bad options.)
Altman was ‘bad-mouthing the board to OpenAI executives’; this likely refers to the Slack conversation Sutskever was involved in reported by WSJ a while ago about how they needed to purge everyone EA-connected
Altman was initially going to cooperate and even offered to help, until Brian Chesky & Ron Conway riled him up. (I believe this because it is unflattering to Altman.)
the OA outside lawyer told them they needed to clam up and not do PR like the Altman faction was
both sides are positioning themselves for the independent report overseen by Summers as the ‘broker’; hence, Altman/Conway leaking the texts quoted at the end posturing about how ‘the board wants silence’ (not that one could tell from the post-restoration leaking & reporting...) and how his name needs to be cleared.
Paul Graham remains hilariously incapable of saying anything unambiguously nice about Altman
One additional thing I’d note: the NYT mentions quotes from a WhatsApp channel that contained hundreds of top SV execs & VCs. This is the sort of thing that you always suspected existed, given how coordinated some things seem to be, but this is a striking confirmation of its existence. It is also, given the past history of SV, like the big wage-fixing scandals of Steve Jobs et al, something I expect contains a lot of statements that they really would not want a prosecutor seeing. One wonders how prudent they have been about covering up message history, and complying with SEC & other regulatory rules about destruction of logs, and if some subpoenas are already winging their way out?
This article confirms—among other things—what I suspected about there being an attempt to oust Altman from Loopt for the same reasons as YC/OA, adds some more examples of Altman amnesia & behavior (including what is, since people apparently care, being caught in a clearcut unambiguous public lie), names the law firm in charge of the report (which is happening), and best of all, explains why Sutskever was so upset about the Jakub Pachocki promotion.
Loopt coup: Vox had hinted at this in 2014 but it was unclear; however, WSJ specifically says that Loopt was in chaos and Altman kept working on side-projects while mismanaging Loopt (so, nearly identical to the much later, unconnected, YC & OA accusations), leading the ‘senior employees’ to (twice!) appeal to the board to fire Altman. You know who won. To quote one of his defenders:
“If he imagines something to be true, it sort of becomes true in his head,” said Mark Jacobstein, co-founder of Jimini Health who served as Loopt’s chief operating officer. “That is an extraordinary trait for entrepreneurs who want to do super ambitious things. It may or may not lead one to stretch, and that can make people uncomfortable.”
Sequoia Capital: the journalists also shed light on the Loopt acquisition. There have long been rumors about the Loopt acquisition by Green Dot being shady (also covered in that Vox article), especially as Loopt didn’t seem to go anywhere under Green Dot, so it hardly looked like a great or natural acquisition—but it was unclear how, and discussion at the time tended to guess that Altman had sold Loopt in a way which made him a lot of money but shafted investors. But it seems that what actually happened was that, again on the side of his Loopt day-job, Altman was doing freelance VC work for Sequoia Capital, and was responsible for getting them into one of the most lucrative startup rounds ever, Stripe. Sequoia then ‘helped engineer an acquisition by another Sequoia-backed company’, Green Dot.
The journalists don’t say this, but the implication here is that Loopt’s acquisition was a highly-deniable kickback to Altman from Sequoia for Stripe & others.
Greg Brockman: also Stripe-related, Brockman’s apparently intense personal loyalty to Altman may stem from this period, where Altman apparently did Brockman a big favor by helping broker the sale of his Stripe shares.
YC firing: some additional details, like Jessica Livingston instigating it, one grievance being his hypocrisy over banning outside funds for YC partners (other than him), and also a clearcut lie by Altman: he posted the YC announcement blog post saying he had been moved to YC Chairman… but YC had not, and never did, agree to that. So that’s why the YC announcements kept getting edited—he’d tried to hustle them into appointing him Chairman to save face.
To smooth his exit, Altman proposed he move from president to chairman. He pre-emptively published a blog post on the firm’s website announcing the change. But the firm’s partnership had never agreed, and the announcement was later scrubbed from the post.
Nice try, but no cigar. (This is something to keep in mind given my earlier comments about Altman talking about his pride in creating a mature executive team etc—if, after the report is done, he stops being CEO and becomes OA board chairman, that means he’s been kicked out of OA.)
Ilya Sutskever: as mentioned above, I felt that we did not have the full picture why Sutskever was so angered by Jakub Pachocki’s promotion. This answers it! Sutskever was angry because he has watched Altman long enough to understand what the promotion meant:
In early fall this year, Ilya Sutskever, also a board member, was upset because Altman had elevated another AI researcher, Jakub Pachocki, to director of research, according to people familiar with the matter. Sutskever told his board colleagues that the episode reflected a long-running pattern of Altman’s tendency to pit employees against one another or promise resources and responsibilities to two different executives at the same time, yielding conflicts, according to people familiar with the matter…Altman has said he runs OpenAI in a “dynamic” fashion, at times giving people temporary leadership roles and later hiring others for the job. He also reallocates computing resources between teams with little warning, according to people familiar with the matter. [cf. Atlantic, WaPo, the anonymous letter]
Ilya recognized the pattern perhaps in part because he has receipts:
In early October, OpenAI’s chief scientist approached some fellow board members to recommend Altman be fired, citing roughly 20 examples of when he believed Altman misled OpenAI executives over the years. That set off weeks of closed-door talks, ending with Altman’s surprise ouster days before Thanksgiving.
Speaking of receipts, the law firm for the independent report has been chosen: WilmerHale. Unclear if they are investigating yet, but I continue to doubt that it will be done before the tender closes early next month.
the level of sourcing indicates Altman’s halo is severely damaged (“This article is based on interviews with dozens of executives, engineers, current and former employees and friends of Altman’s, as well as investors.”). Before, all of this was hidden; as the article notes of the YC firing:
For years, even some of Altman’s closest associates—including Peter Thiel, Altman’s first backer for Hydrazine—didn’t know the circumstances behind Altman’s departure.
If even Altman’s mentor didn’t know, no wonder no one else seems to have known—aside from those directly involved in the firing, like, for example, YC board member Emmett Shear. But now it’s all on the record, with even Graham & Livingston acknowledging the firing (albeit quibbling a little: come on, Graham, if you ‘agree to leave immediately’, that’s still ‘being fired’).
Tasha McCauley’s role finally emerges a little more: she had been trying to talk to OA executives without Altman’s presence, and Altman demanded to be informed of any Board communication with employees. It’s unclear if he got his way.
So, a mix of confirmation and minor details continuing to flesh out the overall saga of Sam Altman as someone who excels at finance, corporate knife-fighting, & covering up manipulation but who is not actually that good at managing or running a company (reminiscent of Xi Jinping), and a few surprises for me.
On a minor level, if McCauley had been trying to talk to employees, then it’s more likely that she was the one that the whistleblowers like Nathan Labenz had been talking to rather than Helen Toner; Toner might have been just the weakest link in her public writings providing a handy excuse. (...Something something 5 lines by the most honest of men...) On a more important level, if Sutskever has a list of 20 documented instances (!) of Altman lying to OA executives (and the Board?), then the Slack discussion may not have been so important after all, and Altman may have good reason to worry—he keeps saying he doesn’t recall any of these unfortunate episodes, and it is hard to defend yourself if you can no longer remember what might turn up...
An OA update: it’s been quiet, but the investigation is over. And Sam Altman won. (EDIT: yep.)
To recap, since I don’t believe I’ve commented on this since December (this is my last big comment, skimming my LW profile): WilmerHale was brought in to do the investigation. The tender offer, to everyone’s relief, went off. A number of embarrassing new details about Sam Altman have surfaced: in particular, about his enormous chip fab plan with substantial interest from giants like Temasek, and how the OA VC Fund turns out to be owned by Sam Altman (his explanation was it saved some paperwork and he just forgot to ever transfer it to OA). Ilya Sutskever remains in hiding and lawyered up (his silence became particularly striking with the release of Sora). There have been increasing reports the past week or two that the WilmerHale investigation was coming to a close—and I am told that the investigators were not offering confidentiality and the investigation was narrowly scoped to the firing. (There was also some OA drama with the Musk lawfare & the OA response, but aside from offering an object lesson in how not to redact sensitive information, it’s both irrelevant & unimportant.)
The main theme of the article is clarifying Murati’s role: as I speculated, she was in fact telling the Board about Altman’s behavior patterns, and it fills in that she had gone further and written it up in a memo to him, and even threatened to leave with Sutskever.
But it reveals a number of other important claims: the investigation is basically done and wrapping up. The new board apparently has been chosen. Sutskever’s lawyer has gone on the record stating that Sutskever did not approach the board about Altman (?!). And it reveals that the board confronted Altman over his ownership of the OA VC Fund (in addition to all his many other conflicts of interest**).
So, what does that mean?
First, as always, in a war of leaks, cui bono? Who is leaking this to the NYT? Well, it’s not the pro-Altman faction: they are at war with the NYT, and these leaks do them no good whatsoever. It’s not the lawyers: these are high-powered elite lawyers, hired for confidentiality and discretion. It’s not Murati or Sutskever, given their lack of motive, and the former’s panicked internal note & Sutskever’s lawyer’s denial. Of the current interim board (which is about to finish its job and leave, handing it over to the expanded replacement board), probably not Larry Summers/Bret Taylor—they were brought on to oversee the report as neutral third-party arbitrators, and if they (a simple majority of the temporary board) want something in their report, no one can stop them from putting it there. It could be Adam D’Angelo or the ex-board: they are the ones who don’t control the report, and they also already have access to all of the newly-leaked-but-old information about Murati & Sutskever & the VC Fund.
So, it’s the anti-Altman faction, associated with the old board. What does that mean?
I think that what this leak indirectly reveals is simple: Sam Altman has won. The investigation will exonerate him, and it is probably true that it was so narrowly scoped from the beginning that it was never going to plausibly provide grounds for his ouster. What these leaks are, are a loser’s spoiler move: the last gasps of the anti-Altman faction, reduced to leaking bits from the final report to friendly media (Metz/NYT) to annoy Altman, and strike first. They got some snippets out before the Altman faction shops around highly selective excerpts to their own friendly media outlets (the usual suspects—The Information, Semafor, Kara Swisher) from the final officialized report to set the official record (at which point the rest of the confidential report is sent down the memory hole). Welp, it’s been an interesting few months, but l’affaire Altman is over. RIP.
Evidence, aside from simply asking who benefits from these particular leaks at the last minute, is that Sutskever remains in hiding & his lawyer is implausibly denying he had anything to do with it, while if you read Altman on social media, you’ll notice that he’s become ever more talkative since December, particularly in the last few weeks—glorying in the instant memeification of ‘$7 trillion’ (as has OA PR*); and we have heard no more rhetoric about what an amazing team of execs OA has and how he’s so proud to have tutored them to replace him. Because there will be no need to replace him now. The only major reasons he would have to leave are if it’s necessary as a stepping stone to something even higher (eg. running the $7t chip fab consortium, running for US President) or something like a health issue.
So, upshot: I speculate that the report will exonerate Altman (although it can’t restore his halo, as it cannot & will not address things like his firing from YC which have been forced out into public light by this whole affair) and that he will be staying as CEO and may be returning to the expanded board; the board will probably include some weak uncommitted token outsiders for their diversity and independence, but have an Altman plurality, and we will see gradual selective attrition/replacement in favor of Altman loyalists until he has a secure majority robust to at least 1 flip and preferably 2. Once he has retaken irrevocable control of OA, further EA purges should be unnecessary, and Altman will probably refocus on the other major weakness exposed by the coup: the fact that his frenemy MS controls OA’s lifeblood. (The fact that MS was such a potent weapon for Altman in the fight is a feature while he’s outside the building, but a severe bug once he’s back inside.) People are laughing at the ‘$7 trillion’. But Altman isn’t laughing. Those GPUs are life and death for OA now. And why should he believe he can’t do it? Things have always worked out for him before...
Predictions, if being a bit more quantitative will help clarify my speculations here: Altman will still be CEO of OA on June 1st (85%); the new OA board will include Altman (60%); Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%; cf. Murati’s desperate-sounding internal note); the full unexpurgated non-summary report will not be released (85%, may be hard to judge because it’d be easy to lie about); serious chip fab/Tigris efforts will continue (75%); Microsoft’s observer seat will be upgraded to a voting seat (25%).
* Eric Newcomer (usually a bit more acute than this) asks “One thing that I find weird: OpenAI comms is giving very pro Altman statements when the board/WilmerHale are still conducting the investigation. Isn’t communications supposed to work for the company, not just the CEO? The board is in charge here still, no?” NARRATOR: “The board is not in charge still.”
** Compare the current OA PR statement on the VC Fund to Altman’s past position on, say, Helen Toner or Reid Hoffman or Shivon Zilis, or Altman’s investment in chip startups touting letters of commitment from OA, or his ongoing Hydrazine investment in OA which, sadly, he has never quite had the time to dispose of in any of the OA tender offers. As usual, CoIs only apply to people Altman doesn’t trust—“for my friends, everything; for my enemies, the law”.
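For anyone who wants to hold me to the predictions above, a minimal sketch of how one might keep score once they resolve; the paraphrased keys, the Brier framing, and the placeholder resolutions are all just illustrative:

```python
# Minimal scoring sketch for the forecasts above (Brier score: lower is
# better; a constant 0.5 guess scores 0.25). Resolutions start as None
# and get filled in with 1.0 (happened) or 0.0 (did not) as events settle.
forecasts = {
    "Altman still CEO of OA on June 1":          0.85,
    "new OA board includes Altman":              0.60,
    "Sutskever leaves/diminished by year-end":   0.90,
    "Murati leaves/diminished by year-end":      0.75,
    "full unexpurgated report never released":   0.85,
    "serious chip fab/Tigris efforts continue":  0.75,
    "MS observer seat upgraded to voting seat":  0.25,
}
resolutions = {k: None for k in forecasts}

def brier(p, outcome):
    """Squared error of a probability against a 0/1 outcome."""
    return (p - outcome) ** 2

scored = {k: brier(p, resolutions[k])
          for k, p in forecasts.items() if resolutions[k] is not None}
if scored:
    print(f"mean Brier so far: {sum(scored.values()) / len(scored):.3f}")
```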
Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%; cf. Murati’s desperate-sounding internal note)
Mira Murati announced today she is resigning from OA. (I have also, incidentally, won a $1k bet with an AI researcher on this prediction.)
See my earlier comments on 23 June 2024 about what ‘OA rot’ would look like; I do not see any revisions necessary given the past 3 months.
As for Murati finally leaving (perhaps she was delayed by the voice shipping delays): I don’t think it matters too much as far as I can tell (not like Sutskever or Brockman leaving), as she was competent but not critical; probably the bigger deal is that her leaving is apparently a big surprise to a lot of OAers (maybe I should’ve taken more bets?), and so will come as a blow to morale and remind people of last year’s events.
...When Mira [Murati] informed me this morning that she was leaving, I was saddened but of course support her decision. For the past year, she has been building out a strong bench of leaders that will continue our progress.
I also want to share that Bob [McGrew] and Barret [Zoph] have decided to depart OpenAI. Mira, Bob, and Barret made these decisions independently of each other and amicably, but the timing of Mira’s decision was such that it made sense to now do this all at once, so that we can work together for a smooth handover to the next generation of leadership.
...Mark [Chen] is going to be our new SVP of Research and will now lead the research org in partnership with Jakub [Pachocki] as Chief Scientist. This has been our long-term succession plan for Bob someday; although it’s happening sooner than we thought, I couldn’t be more excited that Mark is stepping into the role. Mark obviously has deep technical expertise, but he has also learned how to be a leader and manager in a very impressive way over the past few years.
Josh[ua] Achiam is going to take on a new role as Head of Mission Alignment, working across the company to ensure that we get all pieces (and culture) right to be in a place to succeed at the mission.
...Mark, Jakub, Kevin, Srinivas, Matt, and Josh will report to me. I have over the past year or so spent most of my time on the non-technical parts of our organization; I am now looking forward to spending most of my time on the technical and product parts of the company.
...Leadership changes are a natural part of companies, especially companies that grow so quickly and are so demanding. I obviously won’t pretend it’s natural for this one to be so abrupt, but we are not a normal company, and I think the reasons Mira explained to me (there is never a good time, anything not abrupt would have leaked, and she wanted to do this while OpenAI was in an upswing) make sense.
(I wish Dr Achiam much luck in his new position at Hogwarts.)
It does not actually make any sense to me that Mira wanted to prevent leaks, and therefore didn’t even tell Sam that she was leaving ahead of time. What would she be afraid of, that Sam would leak the fact that she was planning to leave… for what benefit?
Possibilities:
She was being squeezed out, or otherwise knew her time was up, and didn’t feel inclined to make it a maximally comfortable parting for OpenAI. She was willing to eat the cost of her own equity potentially losing a bunch of value if this derailed the ongoing investment round, as well as the reputational cost of Sam calling out the fact that she, the CTO of the most valuable startup in the world, resigned with no notice for no apparent good reason.
Sam is lying or otherwise being substantially misleading about the circumstances of Mira’s resignation, i.e. it was not in fact a same-day surprise to him. (And thinks she won’t call him out on it?)
Of course it doesn’t make sense. It doesn’t have to. It just has to be a face-saving excuse for why she pragmatically told him at the last possible minute. (Also, it’s not obvious that the equity round hasn’t basically closed.)
At least from the intro, it sounds like my predictions were on-point: re-appointed Altman (I waffled about this at 60%: his narcissism/desire to be vindicated required him to regain his board seat, since anything less is a blot on his escutcheon, and the pragmatic desire to lock down the board also strongly militated for his reinstatement, but it seemed so blatant a powergrab in this context that surely he wouldn’t dare...? Guess he did), released to an Altman outlet (The Information), with 3 weak apparently ‘independent’ and ‘diverse’ directors to pad out the board and eventually be replaced by full Altman loyalists—although I bet if one looks closer into these three women (Sue Desmond-Hellmann, Nicole Seligman, & Fidji Simo), one will find at least one has buried Altman ties. (Fidji Simo, Instacart CEO, seems like the most obvious one there: Instacart was YC S12.)
As predicted, the full report will not be released, only the ‘summary’ focused on exonerating Altman. Also as predicted, ‘the mountain has given birth to a mouse’ and the report was narrowly scoped to just the firing: they bluster about “reviewing 30,000 documents” (easy enough when you can just grep Slack + text messages + emails...), but then admit that they looked only at “the events concerning the November 17, 2023 removal” and interviewed hardly anyone (“dozens of interviews” barely even covers the immediate dramatis personae, much less any kind of investigation into Altman’s chip stuff, Altman’s many broken promises, Brockman’s complainers etc). Doesn’t sound like they have much to show for over 3 months of work by the smartest & highest-paid lawyers, does it… It also seems like they indeed did not promise confidentiality or set up any kind of anonymous reporting mechanism, given that they mention no such thing and include setting up a hotline for whistleblowers as a ‘recommendation’ for the future (ie. there was no such thing before or during the investigation). So, it was a whitewash from the beginning. Tellingly, there is nothing about Microsoft, and no hint their observer will be upgraded (or that there still even is one). And while flattering to Brockman, there is nothing about Murati—free tip to all my VC & DL startup acquaintances, there’s a highly competent AI manager who’s looking for exciting new opportunities, even if she doesn’t realize it yet.
Also entertaining is that you can see the media spin happening in real time. What WilmerHale signs off on:
WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal.
Which is… less than complimentary? One would hope a CEO does a little bit better than merely not engage in ‘conduct which mandates removal’? And turns into headlines like
(Nothing from Kara Swisher so far, but judging from her Twitter, she’s too busy promoting her new book and bonding with Altman over their mutual dislike of Elon Musk to spare any time for relatively-minor-sounding news.)
OK, so what was not as predicted? What is surprising?
This is not a full replacement board, but implies that Adam D’Angelo/Bret Taylor/Larry Summers are all staying on the board, at least for now. (So the new composition is D’Angelo/Taylor/Summers/Altman/Desmond-Hellmann/Seligman/Simo plus the unknown Microsoft non-voting observer.) This is surprising, but it may simply be a quotidian logistics problem—they hadn’t settled on 3 more adequately diverse and prima-facie qualified OA board candidates yet, but the report was finished and it was more important to wind things up, and they’ll get to the remainder later. (Perhaps Brockman will get his seat back?)
EDIT: A HNer points out that today, March 8th, is “International Women’s Day”, and this is probably the reason for the exact timing of the announcement. If so, they may well have already picked the remaining candidates (Brockman?), but those weren’t women and so got left out of the announcement. Stay tuned, I guess. EDITEDIT: the video call/press conference seems to confirm that they do plan more board appointments: “OpenAI will continue to expand the board moving forward, according to a Zoom call with reporters.” So that is consistent with the hurried women-only announcement.
(Fixed. This is a surname typo I make an unbelievable number of times because I reflexively overcorrect it to ‘Sumners’, due to reading a lot more of Scott Sumner than Larry Summers. Ugh—just caught myself doing it again in a Reddit comment...)
It was either Hydrazine or YC. In either case, my point remains true: he’s chosen to not dispose of his OA stake, whatever vehicle it is held in, even though it would be easy for someone of his financial acumen to do so by a sale or equivalent arrangement, forcing an embarrassing asterisk onto his claims to have no direct financial conflict of interest in OA LLC—one which comes up regularly in bad OA PR (particularly by people who believe it is less than candid to say you have no financial interest in OA when you totally do), and a stake which might be quite large at this point*, and so is particularly striking given his attitude towards much smaller conflicts supposedly risking bad OA PR. (This is in addition to the earlier conflicts of interest in Hydrazine while running YC, or the interest of outsiders in investing in Hydrazine, apparently as a stepping stone towards OA.)
* if he invested a ‘small’ amount via some vehicle before he even went full-time at OA, when OA was valued at some very small amount like $50m or $100m, say, and OA’s now valued at anywhere up to $90,000m or >900x more, and further, he strongly believes it’s going to be worth far more than that in the near-future… Sure, it may be worth ‘just’ $500m or ‘just’ $1000m after dilution or whatever, but to most people that’s pretty serious money!
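To spell out the footnote’s arithmetic, a back-of-the-envelope sketch in which every input (entry valuation, check size, dilution) is a hypothetical of mine, since none of the real figures are public:

```python
# Back-of-the-envelope only: neither Altman's actual stake nor the exact
# valuations are public, so all inputs below are hypothetical.
early_valuation   = 100e6  # assume OA valued at ~$100m when he invested
current_valuation = 90e9   # ~$90b per recent tender-offer reporting
print(f"multiple: {current_valuation / early_valuation:,.0f}x")  # 900x

check_size     = 1e6                           # a 'small' early check (assumed)
stake_fraction = check_size / early_valuation  # 1% at entry
dilution       = 0.5                           # assume half diluted away since
stake_value    = current_valuation * stake_fraction * dilution
print(f"stake today: ${stake_value / 1e6:,.0f}m")  # ~$450m: 'pretty serious money'
```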
Why do you think McCauley is likely to be the board member Labenz spoke to? I had inferred that it was someone not particularly concerned about safety, given that Labenz reported them saying they could easily request access to the model if they’d wanted to (and hadn’t). I took the point of the anecdote to be ‘here was a board member not concerned about safety’.
Because there is not currently any evidence that Toner was going around talking to a bunch of people, whereas this says McCauley was doing so. If I have to guess “did Labenz talk to the person who was talking to a bunch of people in OA, or did he talk to the person who was as far as I know not talking to a bunch of people in OA?”, I am going to guess the former.
They weren’t the only non-employee board members though—that’s what I meant by the part about not being concerned about safety: I took it to rule out both Toner and McCauley.
(Although if for some other reason you were only looking at Toner and McCauley, then no, I would say the person going around speaking to OAI employees is less likely to be out of the loop on GPT-4’s capabilities.)
The other ones are unlikely. Shivon Zilis & Reid Hoffman had left by this point; Will Hurd might or might not still be on the board at this point but wouldn’t be described nor recommended by Labenz’s acquaintance as researching AI safety, as that does not describe Hurd or D’Angelo; Brockman, Altman, and Sutskever are right out (Sutskever researches AI safety but Superalignment was a year away); by process of elimination, over 2023, the only board members he could have been plausibly contacting would be Toner and McCauley, and while Toner weakly made more sense before, now McCauley does.
(The description of them not having used the model unfortunately does not distinguish either one—none of the writings connected to them sound like they have all that much hands-on experience and would be eagerly prompt-engineering away at GPT-4-base the moment they got access. And I agree that this is a big mistake, but it is, even more unfortunately, an extremely common one—I remain shocked that Altman had apparently never actually used GPT-3 before he basically bet the company on it. There is a widespread attitude, even among those bullish about the economics, that GPT-3 or GPT-4 are just ‘tools’, which are mere ‘stochastic parrots’, and have no puzzling internal dynamics or complexities. I have been criticizing this from the start, but the problem is, ‘sampling can show the presence of knowledge and not the absence’, so if you don’t think there’s anything interesting there, your prompts are a mirror which reflects only your low expectations; and the safety tuning makes it worse by hiding most of the agency & anomalies, often in ways that look like good things. For example, the rhyming poetry ought to alarm everyone who sees it, because of what it implies underneath—but it doesn’t. This is why descriptions of Sydney or GPT-4-base are helpful: they are warning shots from the shoggoth behind the friendly tool-AI ChatGPT UI mask.)
I think you might be misremembering the podcast? Nathan said that he was assured that the board as a whole was serious about safety, but I don’t remember the specific board member being recommended as someone researching AI safety (or otherwise more pro safety than the rest of the board). I went back through the transcript to check and couldn’t find any reference to what you’ve said.
“And ultimately, in the end, basically everybody said, “What you should do is go talk to somebody on the OpenAI board. Don’t blow it up. You don’t need to go outside of the chain of command, certainly not yet. Just go to the board. And there are serious people on the board, people that have been chosen to be on the board of the governing nonprofit because they really care about this stuff. They’re committed to long-term AI safety, and they will hear you out. And if you have news that they don’t know, they will take it seriously.” So I was like, “OK, can you put me in touch with a board member?” And so they did that, and I went and talked to this one board member. And this was the moment where it went from like, “whoa” to “really whoa.””
I was not referring to the podcast (which I haven’t actually read yet, because from the intro it seems wildly out of date and from a long time ago) but to Labenz’s original Twitter thread turned into a Substack post. I think you misinterpret what he is saying in that transcript because it is loose and extemporaneous: “they’re committed” could just as easily refer to the “serious people on the board” who have “been chosen” for that (implying that there are other members of the board not chosen for that); and that is what he says in the written-down post:
I consulted with a few friends in AI safety research…The Board, everyone agreed, included multiple serious people who were committed to safe development of AI and would definitely hear me out, look into the state of safety practice at the company, and take action as needed. What happened next shocked me. The Board member I spoke to was largely in the dark about GPT-4. They had seen a demo and had heard that it was strong, but had not used it personally. They said they were confident they could get access if they wanted to. I couldn’t believe it. I got access via a “Customer Preview” 2+ months ago, and you as a Board member haven’t even tried it‽ This thing is human-level, for crying out loud (though not human-like!).
This quote doesn’t say anything about the board member/s being people who are researching AI safety though—it’s Nathan’s friends who are in AI safety research not the board members.
I agree that based on this quote, it could have very well been just a subset of the board. But I believe Nathan’s wife works for CEA (and he’s previously MCed an EAG), and Tasha is (or was?) on the board of EVF US, and so idk, if it’s Tasha he spoke to and the “multiple people” was just her and Helen, I would have expected a rather different description of events/vibe. E.g. something like ‘I googled who was on the board and realised that two of them were EAs, so I reached out to discuss’. I mean maybe that is closer to what happened and it’s just being obfuscated, either way is confusing to me tbh.
Btw, by “out of date” do you mean relative to now, or to when the events took place? From what I can see, the tweet thread, the substack post and the podcast were all published the same day—Nov 22nd 2023. The link I provided is just 80k excerpting the original podcast.
I suspect there is much more to this thread, and it may tie back to Superalignment & broken promises about compute-quotas.
The Superalignment compute-quota flashpoint is now confirmed. Aside from Jan Leike explicitly calling out compute-quota shortages post-coup (which strictly speaking doesn’t confirm shortages pre-coup), Fortune is now reporting that this was a serious & longstanding issue:
...According to a half-dozen sources familiar with the functioning of OpenAI’s Superalignment team, OpenAI never fulfilled its commitment to provide the team with 20% of its computing power.
Instead, according to the sources, the team repeatedly saw its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, turned down by OpenAI’s leadership, even though the team’s total compute budget never came close to the promised 20% threshold.
The revelations call into question how serious OpenAI ever was about honoring its public pledge, and whether other public commitments the company makes should be trusted. OpenAI did not respond to requests to comment for this story.
...It was a task so important that the company said in its announcement that it would commit “20% of the compute we’ve secured to date over the next four years” to the effort. But a half-dozen sources familiar with the Superalignment team’s work said that the group was never allocated this compute. Instead, it received far less in the company’s regular compute allocation budget, which is reassessed quarterly.
One source familiar with the Superalignment team’s work said that there were never any clear metrics around exactly how the 20% amount was to be calculated, leaving it subject to wide interpretation. For instance, the source said the team was never told whether the promise meant “20% each year for four years” or “5% a year for four years” or some variable amount that could wind up being “1% or 2% for the first three years, and then the bulk of the commitment in the fourth year.” In any case, all the sources Fortune spoke to for this story confirmed that the Superalignment team was never given anything close to 20% of OpenAI’s secured compute as of July 2023.
OpenAI researchers can also make requests for what is known as “flex” compute—access to additional GPU capacity beyond what has been budgeted—to deal with new projects between the quarterly budgeting meetings. But flex requests from the Superalignment team were routinely rejected by higher-ups, these sources said.
Bob McGrew, OpenAI’s vice president of research, was the executive who informed the team that these requests were being declined, the sources said, but others at the company, including chief technology officer Mira Murati, were involved in making the decisions. Neither McGrew nor Murati responded to requests to comment for this story.
While the team did carry out some research—it released a paper detailing its experiments in successfully getting a less powerful AI model to control a more powerful one in December 2023—the lack of compute stymied the team’s more ambitious ideas, the source said. After resigning, Leike on Friday published a series of posts on Twitter in which he criticized his former employer, saying “safety culture and processes have taken a backseat to shiny products.” He also said that “over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”
5 sources familiar with the Superalignment team’s work backed up Leike’s account, saying that the problems with accessing compute worsened in the wake of the pre-Thanksgiving showdown between Altman and the board of the OpenAI nonprofit foundation.
...One source disputed the way the other sources Fortune spoke to characterized the compute problems the Superalignment team faced, saying they predated Sutskever’s participation in the failed coup, plaguing the group from the get-go.
While there have been some reports that Sutskever was continuing to co-lead the Superalignment team remotely, sources familiar with the team’s work said this was not the case and that Sutskever had no access to the team’s work and played no role in directing the team after Thanksgiving. With Sutskever gone, the Superalignment team lost the only person on the team who had enough political capital within the organization to successfully argue for its compute allocation, the sources said.
...The people who spoke to Fortune did so anonymously, either because they said they feared losing their jobs, or because they feared losing vested equity in the company, or both. Employees who have left OpenAI have been forced to sign separation agreements that include a strict non-disparagement clause that says the company can claw back their vested equity if they criticize the company publicly, or if they even acknowledge the clause’s existence. And employees have been told that anyone who refuses to sign the separation agreement will forfeit their equity as well.
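To make concrete just how far apart the readings quoted above are, a toy comparison; the secured-compute figure is an arbitrary placeholder, since OA never published one:

```python
# Toy comparison of possible readings of the "20% of the compute we've
# secured to date" pledge. 'secured' is an arbitrary placeholder unit.
secured = 100.0  # compute secured as of July 2023, in made-up units

readings = {
    "20% each year for four years": [0.20 * secured] * 4,
    "5% a year for four years":     [0.05 * secured] * 4,
    "backloaded (1%/2%/2%/15%)":    [p * secured for p in (0.01, 0.02, 0.02, 0.15)],
}
for name, schedule in readings.items():
    print(f"{name:30s} year 1: {schedule[0]:5.1f}  4-year total: {sum(schedule):5.1f}")
# Year-1 allocations differ by 20x across readings, so a team could be
# 'never close to 20%' under one reading while on track under another.
```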
There seems to be very little discussion of this story on Twitter. WP’s tweet about it got only 75k views and 59 likes as of now, even though WP has 2M followers.
(I guess Twitter will hide your tweets even from your followers if the engagement rate is low enough. Not sure what the cutoff is, but 1 like to 100 views doesn’t seem uncommon for tweets, and this one is only 1:1000. BTW what’s a good article to read to understand Twitter better?)
There’s two things going on. First, Musk-Twitter appears to massively penalize external links. Musk has vowed to fight ‘spammers’ who post links on Twitter to other sites (gasp) - the traitorous scum! Substack is only the most abhorred of these vile parasites, but all shall be brought to justice in due course. There is no need for other sites. You should be posting everything on Twitter as longform tweets (after subscribing), obviously.
You only just joined Twitter so you wouldn’t have noticed the change, but even direct followers seem to be less likely to see a tweet if you’ve put a link in it. So tweeters are increasingly reacting by putting the external link at the end of a thread in a separate quarantine tweet, not bothering with the link at all, or just leaving Twitter under the constant silent treatment that high-quality tweeting gets you these days.* So, many of the people who would be linking or discussing it are either not linking it or not discussing it, and don’t show up in the WaPo thread or by a URL search.
Second, OAers/pro-Altman tweets are practicing the Voldemort strategy: instead of linking the WaPo article at all (note that roon, Eigenrobot etc don’t show up at all in the URL search), they are tweeting screenshots or Archive.is links. This is unnecessary (aside from the external link penalty of #1) since the WaPo has one of the most porous paywalls around which will scarcely hinder any readers, but this lets them inject their spin since you have to retweet them if you want to reshare it at all, impedes reading the article yourself to see if it’s as utterly terrible and meaningless as they claim, and makes it harder to search for any discussion (what, are you going to know to search for the random archive.is snapshot...? no, of course not).
* I continue to stubbornly include all relevant external links in my tweets rather than use workarounds, and see the penalty constantly. It has definitely soured me even further on Musk-Twitter, particularly as it is contrary to the noises Musk has made about the importance of freedom of speech and higher reliability of tweets—yeah, asshole, how are you going to have highly reliable tweets or a good information ecosystem if including sources & references is almost like a self-imposed ban? And then you share ad revenue with subscribers who tweet the most inflammatory poorly-sourced stuff, great incentive design you’ve hit upon… I’m curious to see how the experience is going to degrade even further—I wouldn’t put it past Musk to make subscriptions mandatory to try to seed the ‘X everything app’ as a hail mary for the failing Twitter business model. At least that might finally be enough to canonicalize a successor everyone can coordinate a move to.
Thanks for the explanations, but I’m not noticing a big “external links” penalty on my own tweets. Found some discussion of this penalty via Google, so it seems real but maybe not that “massive”? Also some of it dates to before Musk purchased Twitter. Can you point me to anything that says he increased the penalty by a lot?
Ah Musk actually published Twitter’s algorithms, confirming the penalty. Don’t see anyone else saying that he increased the penalty though.
BTW why do you “protect” your account (preventing non-followers from seeing your tweets)?
Ah Musk actually published Twitter’s algorithms, confirming the penalty. Don’t see anyone else saying that he increased the penalty though.
‘The algorithm’ is an emergent function of the entire ecosystem. I have no way of knowing what sort of downstream effects a tweak here or there would cause or the effects of post-Musk changes. I just know what I see: my tweets appear to have plummeted since Musk took over, particularly when I link to my new essays or documents etc.
If you want to do a more rigorous analysis, I export my Twitter analytics every few months (thank goodness Musk hasn’t disabled that to try to upsell people to the subscription—maybe he doesn’t know it’s there?) and could provide you my archives. (BTW, there is a moving window where you can only get the last few months, so if you think you will ever be interested in your Twitter traffic numbers, you need to start exporting them every 2-3 months now, or else the historical data will become inaccessible. I don’t know if you can restore access to old ones by signing up as an advertiser.) EDIT: I looked at the last full pre-Musk month and my last month, and I’ve lost ~75% of views/clicks/interactions, despite trying to use Twitter in the same way.
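If anyone wants to replicate that before/after comparison, here is a sketch assuming the standard Twitter Analytics CSV export; the filenames are hypothetical, and the column names are assumptions to verify against your own export’s header row:

```python
# Sketch of the pre-/post-Musk traffic comparison described above.
# Filenames are hypothetical; the "impressions"/"engagements" column
# names are assumptions; verify against your own export's header row.
import pandas as pd

pre  = pd.read_csv("tweet_activity_2022-09.csv")  # last full pre-Musk month
post = pd.read_csv("tweet_activity_2024-01.csv")  # most recent full month

for col in ["impressions", "engagements"]:
    decline = 1 - post[col].sum() / pre[col].sum()
    print(f"{col}: {decline:.0%} decline")  # ~75% in my case
```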
As for the ‘published’ algorithm, I semi-believe it is genuine (albeit doubtless incomplete) because Musk was embarrassed that it exposed how some parts of the new algorithm are manipulating Twitter to make Musk look more popular (confirming earlier reporting that Musk had ordered such changes after getting angry his views were dropping due to his crummy tweets), but that is also why it hasn’t been updated in almost half a year, apparently. God knows what the real thing is like by now...
Could you link to some examples of “ OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon”? I don’t have a twitter account so can’t search myself
Latest news: Time sheds considerably more light on the board position, in its discouragingly-named piece “2023 CEO of the Year: Sam Altman” (excerpts; HN). While it sounds & starts like a puff piece (no offense to Ollie—cute coyote photos!), it actually contains a fair bit of leaking I haven’t seen anywhere else. Most strikingly:
claims that the Board thought it had the OA executives on its side, because the executives had approached it about Altman:
(The wording here strongly implies it was not Sutskever.) This of course greatly undermines the “incompetent Board” narrative, possibly explains both why the Board thought it could trust Mira Murati & why she didn’t inform Altman ahead of time (was she one of those execs...?), and casts further doubt on the ~100% signature rate of the famous OA employee letter.
Now that it’s safe(r) to say negative things about Altman, because it has become common knowledge that he was fired from Y Combinator and there is an independent investigation planned at OA, it seems that more of these incidents have been coming to light.
confirms my earlier interpretation that at least one of the dishonesties was specifically lying to a board member that another member wanted to immediately fire Toner to manipulate them into her ouster:
EDIT: WSJ (excerpts) is also reporting this, by way of a Helen Toner interview (which doesn’t say much on the record, but does provide context for why she said that quote everyone used as a club against her: an OA lawyer lied to her about her ‘fiduciary duties’ while threatening to sue & bankrupt, and she got mad and pointed out that even outright destroying OA would be consistent with the mission & charter so she definitely didn’t have any ‘fiduciary duty’ to maximize OA profits).
Unfortunately, the Time article, while seeming to downplay how much the Toner incident mattered by saying it didn’t “spur” the decision, doesn’t explain what did spur it, nor refer to the Sutskever Slack discussion AFAICT. So I continue to maintain that Altman was moving to remove Toner so urgently in order to hijack the board, and that this attempt was one of the major concerns, and his deception around Toner’s removal, and particularly the executives discussing the EA purge, was probably the final proximate cause which was concrete enough & came with enough receipts to remove whatever doubt they had left (whether that was “the straw that broke the camel’s back” or “the smoking gun”).
continues to undermine ‘Q* truthers’ by not even mentioning it (except possibly a passing reference by Altman at the end to “doubling down on certain research areas”)
The article does provide good color and other details I won’t try to excerpt in full (although some are intriguing—where, exactly, was this feedback to Altman about him being dishonest in order to please people?), eg:
If you’ve noticed OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon, it’s because another set of leaks has dropped, and they are again unflattering to Sam Altman & consistent with the previous ones.
Today the Washington Post adds to the pile, “Warning from OpenAI leaders helped trigger Sam Altman’s ouster: The senior employees described Altman as psychologically abusive, creating delays at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO” (archive.is; HN; excerpts), which confirms the Time/WSJ reporting about executives approaching the board with concerns about Altman, and adds more details—their concerns did not relate to the Toner dispute, but were apparently about regular employees:
(I continue to speculate Superalignment was involved, if only due to their enormous promised compute-quota and small headcount, but the wording here seems like it involved more than just a single team or group, and also points back to some of the earlier reporting and the other open letter, so there may be many more incidents than appreciated.)
An elaboration on the WaPo article in the 2023-12-09 NYT: “Inside OpenAI’s Crisis Over the Future of Artificial Intelligence: Split over the Leadership of Sam Altman, Board Members and Executives Turned on One Another. Their Brawl Exposed the Cracks at the Heart of the AI Movement” (excerpts). Mostly a gossipy narrative from both the Altman & D’Angelo sides, so I’ll just copy over my HN comment:
another report of internal OA complaints about Altman’s manipulative/divisive behavior; see previously on HN
previously we knew Altman had been dividing-and-conquering the board by lying about others wanting to fire Toner; this says that, specifically, Altman had lied about McCauley wanting to fire Toner; presumably, this was said to D’Angelo.
Concerns over Tigris had been mooted, but this says specifically that the board thought Altman had not been forthcoming about it; still unclear if he had tried to conceal Tigris entirely or if he had failed to mention something more specific like who he was trying to recruit for capital.
Sutskever had threatened to quit after Jakub Pachocki’s promotion; previous reporting had said he was upset about it, but hadn’t hinted at him being so angry as to threaten to quit OA
Sutskever doesn’t seem to be too rank/promotion-hungry (why would he be? he is, to quote one article, ‘a god of AI’, and is now one of the most-cited researchers ever) and one would think it would take a lot for him to threaten to quit OA… Coverage thus far seems content to take the attitude that he must be some sort of 5-year-old child throwing a temper tantrum over a slight, but I find this explanation inadequate.
I suspect there is much more to this thread, and it may tie back to Superalignment & broken promises about compute-quotas. (Permanently subverting or killing safety research does seem like adequate grounds for Sutskever to deliver an ultimatum, for the same reasons that the Board killing OA can be the best of several bad options.)
Altman was ‘bad-mouthing the board to OpenAI executives’; this likely refers to the Slack conversation Sutskever was involved in (reported by the WSJ a while ago) about how they needed to purge everyone EA-connected
Altman was initially going to cooperate and even offered to help, until Brian Chesky & Ron Conway riled him up. (I believe this because it is unflattering to Altman.)
the OA outside lawyer told them they needed to clam up and not do PR like the Altman faction was
both sides are positioning themselves for the independent report overseen by Summers as the ‘broker’; hence, Altman/Conway leaking the texts quoted at the end posturing about how ‘the board wants silence’ (not that one could tell from the post-restoration leaking & reporting...) and how his name needs to be cleared.
Paul Graham remains hilariously incapable of saying anything unambiguously nice about Altman
One additional thing I’d note: the NYT mentions quotes from a WhatsApp channel that contained hundreds of top SV execs & VCs. This is the sort of thing that you always suspected existed, given how coordinated some things seem to be, but this is a striking confirmation of its existence. It is also, given the past history of SV like the big wage-fixing scandals of Steve Jobs et al, something I expect contains a lot of statements that they really would not want a prosecutor seeing. One wonders how prudent they have been about covering up message history, and complying with SEC & other regulatory rules about destruction of logs, and if some subpoenas are already winging their way out?
EDIT: Zvi commentary reviewing the past few articles, hitting most of the same points: https://thezvi.substack.com/p/openai-leaks-confirm-the-story/
The WSJ dashes our hopes for a quiet Christmas by dropping on Christmas Eve a further extension of all this reporting: “Sam Altman’s Knack for Dodging Bullets—With a Little Help From Bigshot Friends: The OpenAI CEO lost the confidence of top leaders in the three organizations he has directed, yet each time he’s rebounded to greater heights”, Seetharaman et al 2023-12-24 (Archive.is, HN; annotated excerpts).
This article confirms—among other things—what I suspected about there being an attempt to oust Altman from Loopt for the same reasons as YC/OA, adds some more examples of Altman amnesia & behavior (including what is, since people apparently care, being caught in a clearcut unambiguous public lie), names the law firm in charge of the report (which is happening), and best of all, explains why Sutskever was so upset about the Jakub Pachocki promotion.
Loopt coup: Vox had hinted at this in 2014 but it was unclear; however, the WSJ specifically says that Loopt was in chaos and Altman kept working on side-projects while mismanaging Loopt (so, nearly identical to the much later, unconnected, YC & OA accusations), leading the ‘senior employees’ to (twice!) appeal to the board to fire Altman. You know who won. To quote one of his defenders:
Sequoia Capital: the journalists also shed light on the Loopt acquisition. There have long been rumors about the Loopt acquisition by Green Dot being shady (also covered in that Vox article), especially as Loopt didn’t seem to go anywhere under Green Dot, so it hardly looked like a great or natural acquisition—but it was unclear how, and the discussions seemed to guess that Altman had sold Loopt in a way which made him a lot of money but shafted investors. It seems that what actually happened was that, again on the side of his Loopt day-job, Altman was doing freelance VC work for Sequoia Capital, and was responsible for getting them into one of the most lucrative startup rounds ever, Stripe. Sequoia then ‘helped engineer an acquisition by another Sequoia-backed company’, Green Dot.
The journalists don’t say this, but the implication here is that Loopt’s acquisition was a highly-deniable kickback to Altman from Sequoia for Stripe & others.
Greg Brockman: also Stripe-related, Brockman’s intense personal loyalty to Altman may stem from this period, when Altman apparently did him a big favor by helping broker the sale of his Stripe shares.
YC firing: some additional details, like Jessica Livingston instigating it, one grievance being his hypocrisy over banning outside funds for YC partners (other than himself), and also a clearcut lie by Altman: he posted the YC announcement blog post saying he had been moved to YC Chairman… but YC had not agreed to that, and never did. So that’s why the YC announcements kept getting edited—he’d tried to hustle them into appointing him Chairman to save face.
Nice try, but no cigar. (This is something to keep in mind given my earlier comments about Altman talking about his pride in creating a mature executive team etc—if, after the report is done, he stops being CEO and becomes OA board chairman, that means he’s been kicked out of OA.)
Ilya Sutskever: as mentioned above, I felt that we did not have the full picture of why Sutskever was so angered by Jakub Pachocki’s promotion. This answers it! Sutskever was angry because he has watched Altman long enough to understand what the promotion meant:
Ilya recognized the pattern perhaps in part because he has receipts:
Speaking of receipts, the law firm for the independent report has been chosen: WilmerHale. Unclear if they are investigating yet, but I continue to doubt that it will be done before the tender closes early next month.
the level of sourcing indicates Altman’s halo is severely damaged (“This article is based on interviews with dozens of executives, engineers, current and former employees and friends of Altman’s, as well as investors.”). Before, all of this was hidden; as the article notes of the YC firing:
If even Altman’s mentor didn’t know, no wonder no one else seems to have known—aside from those directly involved in the firing, such as YC board member Emmett Shear. But now it’s all on the record, with even Graham & Livingston acknowledging the firing (albeit quibbling a little: come on, Graham, if you ‘agree to leave immediately’, that’s still ‘being fired’).
Tasha McCauley’s role finally emerges a little more: she had been trying to talk to OA executives without Altman present, and Altman demanded to be informed of any Board communication with employees. It’s unclear if he got his way.
So, a mix of confirmation and minor details continuing to flesh out the overall saga of Sam Altman as someone who excels at finance, corporate knife-fighting, & covering up manipulation but who is not actually that good at managing or running a company (reminiscent of Xi Jinping), and a few surprises for me.
On a minor level, if McCauley had been trying to talk to employees, then it’s more likely that she was the one whom whistleblowers like Nathan Labenz had been talking to, rather than Helen Toner; Toner might simply have been the weakest link, her public writings providing a handy excuse. (...Something something six lines by the most honest of men...) On a more important level, if Sutskever has a list of 20 documented instances (!) of Altman lying to OA executives (and the Board?), then the Slack discussion may not have been so important after all, and Altman may have good reason to worry—he keeps saying he doesn’t recall any of these unfortunate episodes, and it is hard to defend yourself if you can no longer remember what might turn up...
An OA update: it’s been quiet, but the investigation is over. And Sam Altman won. (EDIT: yep.)
To recap, because I believe I haven’t been commenting on this since December (this is my last big comment, skimming my LW profile): WilmerHale was brought in to do the investigation. The tender offer, to everyone’s relief, went off. A number of embarrassing new details about Sam Altman have surfaced: in particular, about his enormous chip fab plan with substantial interest from giants like Temasek, and how the OA VC Fund turns out to be owned by Sam Altman (his explanation was that it saved some paperwork and he just forgot to ever transfer it to OA). Ilya Sutskever remains in hiding and lawyered up (his silence became particularly striking with the release of Sora). There have been increasing reports the past week or two that the WilmerHale investigation was coming to a close—and I am told that the investigators were not offering confidentiality and the investigation was narrowly scoped to the firing. (There was also some OA drama with the Musk lawfare & the OA response, but aside from offering an object lesson in how not to redact sensitive information, it’s both irrelevant & unimportant.)
The news today comes from the NYT leaking information from the final report: “Key OpenAI Executive [Mira Murati] Played a Pivotal Role in Sam Altman’s Ouster” (mirror; EDIT: largely confirmed by Murati in internal note).
The main theme of the article is clarifying Murati’s role: as I speculated, she was in fact telling the Board about Altman’s behavior patterns, and it fills in that she had gone further and written it up in a memo to him, and even threatened to leave with Sutskever.
But it reveals a number of other important claims: the investigation is basically done and wrapping up. The new board apparently has been chosen. Sutskever’s lawyer has gone on the record stating that Sutskever did not approach the board about Altman (?!). And it reveals that the board confronted Altman over his ownership of the OA VC Fund (in addition to all his many other conflicts of interest**).
So, what does that mean?
First, as always, in a war of leaks, cui bono? Who is leaking this to the NYT? Well, it’s not the pro-Altman faction: they are at war with the NYT, and these leaks do them no good whatsoever. It’s not the lawyers: these are high-powered elite lawyers, hired for confidentiality and discretion. It’s not Murati or Sutskever, given their lack of motive, and the former’s panicked internal note & Sutskever’s lawyer’s denial. Of the current interim board (which is about to finish its job and leave, handing it over to the expanded replacement board), probably not Larry Summers/Bret Taylor—they were brought on to oversee the report as neutral third-party arbitrators, and if they (a simple majority of the temporary board) want something in their report, no one can stop them from putting it there. It could be Adam D’Angelo or the ex-board: they are the ones who don’t control the report, and they also already have access to all of the newly-leaked-but-old information about Murati & Sutskever & the VC Fund.
So, it’s the anti-Altman faction, associated with the old board. What does that mean?
I think that what this leak indirectly reveals is simple: Sam Altman has won. The investigation will exonerate him, and it is probably true that it was so narrowly scoped from the beginning that it was never going to plausibly provide grounds for his ouster. What these leaks are, are a loser’s spoiler move: the last gasps of the anti-Altman faction, reduced to leaking bits from the final report to friendly media (Metz/NYT) to annoy Altman, and strike first. They got some snippets out before the Altman faction can shop around highly selective excerpts from the final officialized report to their own friendly media outlets (the usual suspects—The Information, Semafor, Kara Swisher) to set the official record (at which point the rest of the confidential report is sent down the memory hole). Welp, it’s been an interesting few months, but l’affaire Altman is over. RIP.
Evidence, aside from simply asking who benefits from these particular leaks at the last minute, is that Sutskever remains in hiding & his lawyer is implausibly denying he had anything to do with it, while if you read Altman on social media, you’ll notice that he’s become ever more talkative since December, particularly in the last few weeks (glorying in the instant memeification of ‘$7 trillion’), as has OA PR*; and we have heard no more rhetoric about what an amazing team of execs OA has and how he’s so proud to have tutored them to replace him. Because there will be no need to replace him now. The only major reasons he would have to leave are if it’s necessary as a stepping stone to something even higher (eg. running the $7t chip fab consortium, running for US President) or something like a health issue.
So, upshot: I speculate that the report will exonerate Altman (although it can’t restore his halo, as it cannot & will not address things like his firing from YC, which have been forced out into the public light by this whole affair) and he will be staying as CEO and may be returning to the expanded board; the board will probably include some weak uncommitted token outsiders for their diversity and independence, but have an Altman plurality, and we will see gradual selective attrition/replacement in favor of Altman loyalists until he has a secure majority robust to at least 1 flip and preferably 2. Once he has retaken irrevocable control of OA, further EA purges should be unnecessary, and Altman will probably refocus on the other major weakness exposed by the coup: the fact that his frenemy MS controls OA’s lifeblood. (The fact that MS was such a potent weapon for Altman in the fight is a feature while he’s outside the building, but a severe bug once he’s back inside.) People are laughing at the ‘$7 trillion’. But Altman isn’t laughing. Those GPUs are life and death for OA now. And why should he believe he can’t do it? Things have always worked out for him before...
Predictions, if being a bit more quantitative will help clarify my speculations here: Altman will still be CEO of OA on June 1st (85%); the new OA board will include Altman (60%); Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%; cf. Murati’s desperate-sounding internal note); the full unexpurgated non-summary report will not be released (85%, may be hard to judge because it’d be easy to lie about); serious chip fab/Tigris efforts will continue (75%); Microsoft’s observer seat will be upgraded to a voting seat (25%).
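(If it helps to make the scoring concrete: a minimal sketch, in Python, of how one might Brier-score this batch once the predictions resolve. The resolutions below are placeholders, not actual outcomes.)

```python
# Sketch: Brier-scoring the predictions above once they resolve.
# Lower is better: 0.0 is perfect, 0.25 is what always-guessing-50% scores.
predictions = {
    "Altman still CEO of OA on June 1":          0.85,
    "new OA board includes Altman":              0.60,
    "Sutskever leaves/diminished by year-end":   0.90,
    "Murati leaves/diminished by year-end":      0.75,
    "full unexpurgated report not released":     0.85,
    "serious chip fab/Tigris efforts continue":  0.75,
    "MS observer seat upgraded to voting seat":  0.25,
}
# Placeholder resolutions: fill these in with the actual outcomes later.
outcomes = {name: True for name in predictions}

brier = sum((p - float(outcomes[name])) ** 2
            for name, p in predictions.items()) / len(predictions)
print(f"mean Brier score: {brier:.3f}")
```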
* Eric Newcomer (usually a bit more acute than this) asks “One thing that I find weird: OpenAI comms is giving very pro Altman statements when the board/WilmerHale are still conducting the investigation. Isn’t communications supposed to work for the company, not just the CEO? The board is in charge here still, no?” NARRATOR: “The board is not in charge still.”
** Compare the current OA PR statement on the VC Fund to Altman’s past position on, say, Helen Toner or Reid Hoffman or Shivon Zilis, or Altman’s investment in chip startups touting letters of commitment from OA, or his ongoing Hydrazine investment in OA which, sadly, he has never quite had the time to dispose of in any of the OA tender offers. As usual, CoIs only apply to people Altman doesn’t trust—“for my friends, everything; for my enemies, the law”.
EDIT: Zvi commentary: https://thezvi.substack.com/p/openai-the-board-expands
Mira Murati announced today she is resigning from OA. (I have also, incidentally, won a $1k bet with an AI researcher on this prediction.)
Do you think this will have any impact on OpenAI’s future revenues / ability to deliver frontier-level models?
See my earlier comments on 23 June 2024 about what ‘OA rot’ would look like; I do not see any revisions necessary given the past 3 months.
As for Murati finally leaving (perhaps she was held up by the voice-mode shipping delays), I don’t think it matters too much as far as I can tell (not like Sutskever or Brockman leaving): she was competent but not critical. Probably the bigger deal is that her leaving was apparently a big surprise to a lot of OAers (maybe I should’ve taken more bets?), and so will come as a blow to morale and remind people of last year’s events.
EDIT: Barret Zoph & Bob McGrew are now gone too. Altman has released a statement, confirming that Murati only quit today:
(I wish Dr Achiam much luck in his new position at Hogwarts.)
It does not actually make any sense to me that Mira wanted to prevent leaks, and therefore didn’t even tell Sam that she was leaving ahead of time. What would she be afraid of, that Sam would leak the fact that she was planning to leave… for what benefit?
Possibilities:
She was being squeezed out, or otherwise knew her time was up, and didn’t feel inclined to make it a maximally comfortable parting for OpenAI. She was willing to eat the cost of her own equity potentially losing a bunch of value if this derailed the ongoing investment round, as well as the reputational cost of Sam calling out the fact that she, the CTO of the most valuable startup in the world, resigned with no notice for no apparent good reason.
Sam is lying or otherwise being substantially misleading about the circumstances of Mira’s resignation, i.e. it was not in fact a same-day surprise to him. (And he thinks she won’t call him out on it?)
???
Of course it doesn’t make sense. It doesn’t have to. It just has to be a face-saving excuse for why she pragmatically told him at the last possible minute. (Also, it’s not obvious that the equity round hasn’t basically closed.)
Looks like you were right, at least if the reporting in this article is correct, and I’m interpreting the claim accurately.
At least from the intro, it sounds like my predictions were on-point: re-appointed Altman (I waffled about this at 60%: while his narcissism/desire to be vindicated required him to regain his board seat, since anything less is a blot on his escutcheon, and the pragmatic desire to lock down the board also strongly militated for his reinstatement, it seemed so blatant a powergrab in this context that surely he wouldn’t dare...? Guess he did), released to an Altman outlet (The Information), with 3 weak, apparently ‘independent’ and ‘diverse’ directors to pad out the board and eventually be replaced by full Altman loyalists—although I bet if one looks closer into these three women (Sue Desmond-Hellmann, Nicole Seligman, & Fidji Simo), one will find at least one has buried Altman ties. (Fidji Simo, Instacart CEO, seems like the most obvious one there: Instacart was YC S12.)
The official OA press releases are out, confirming The Information: https://openai.com/blog/review-completed-altman-brockman-to-continue-to-lead-openai and https://openai.com/blog/openai-announces-new-members-to-board-of-directors
He’s probably right.
As predicted, the full report will not be released, only the ‘summary’ focused on exonerating Altman. Also as predicted, ‘the mountain has given birth to a mouse’ and the report was narrowly scoped to just the firing: they bluster about “reviewing 30,000 documents” (easy enough when you can just grep Slack + text messages + emails...), but then admit that they looked only at “the events concerning the November 17, 2023 removal” and interviewed hardly anyone (“dozens of interviews” barely even covers the immediate dramatis personae, much less any kind of investigation into Altman’s chip stuff, Altman’s many broken promises, Brockman’s complainers etc). Doesn’t sound like they have much to show for over 3 months of work by the smartest & highest-paid lawyers, does it…
It also seems like they indeed did not promise confidentiality or set up any kind of anonymous reporting mechanism, given that they mention no such thing and include setting up a hotline for whistleblowers as a ‘recommendation’ for the future (ie. there was no such thing before or during the investigation). So, it was a whitewash from the beginning. Tellingly, there is nothing about Microsoft, and no hint their observer will be upgraded (or that there still even is one). And while flattering to Brockman, there is nothing about Murati—free tip to all my VC & DL startup acquaintances: there’s a highly competent AI manager who’s looking for exciting new opportunities, even if she doesn’t realize it yet.
Also entertaining is that you can see the media spin happening in real time. What WilmerHale signs off on:
Which is… less than complimentary? One would hope a CEO does a little bit better than merely not engage in ‘conduct which mandates removal’? And it turns into headlines like:
“OpenAI’s Sam Altman Returns to Board After Probe Clears Him”
(Nothing from Kara Swisher so far, but judging from her Twitter, she’s too busy promoting her new book and bonding with Altman over their mutual dislike of Elon Musk to spare any time for relatively-minor-sounding news.)
OK, so what was not as predicted? What is surprising?
This is not a full replacement board, but implies that Adam D’Angelo/Bret Taylor/Larry Summers are all staying on the board, at least for now. (So the new composition is D’Angelo/Taylor/Summers/Altman/Desmond-Hellmann/Seligman/Simo plus the unknown Microsoft non-voting observer.) This is surprising, but it may simply be a quotidian logistics problem—they hadn’t settled on 3 more adequately diverse and prima-facie qualified OA board candidates yet, but the report was finished and it was more important to wind things up, and they’ll get to the remainder later. (Perhaps Brockman will get his seat back?)
EDIT: A HNer points out that today, March 8th, is “International Women’s Day”, and this is probably the reason for the exact timing of the announcement. If so, they may well have already picked the remaining candidates (Brockman?), but those weren’t women and so got left out of the announcement. Stay tuned, I guess. EDITEDIT: the video call/press conference seems to confirm that they do plan more board appointments: “OpenAI will continue to expand the board moving forward, according to a Zoom call with reporters.” So that is consistent with the hurried women-only announcement.
Heh, here it is: https://x.com/miramurati/status/1839025700009030027
Nitpick: Larry Summers not Larry Sumners
(Fixed. This is a surname typo I make an unbelievable number of times because I reflexively overcorrect it to ‘Sumners’, due to reading a lot more of Scott Sumner than Larry Summers. Ugh—just caught myself doing it again in a Reddit comment...)
Yeah I figured Scott Sumner must have been involved.
Source?
@gwern I’ve failed to find a source saying that Hydrazine invested in OpenAI. If it did, that would be a big deal; it would make this a lie.
It was either Hydrazine or YC. In either case, my point remains true: he’s chosen to not dispose of his OA stake, whatever vehicle it is held in, even though it would be easy for someone of his financial acumen to do so by a sale or equivalent arrangement, forcing an embarrassing asterisk to his claims to have no direct financial conflict of interest in OA LLC—and one which comes up regularly in bad OA PR (particularly by people who believe it is less than candid to say you have no financial interest in OA when you totally do), and a stake which might be quite large at this point*, and so is particularly striking given his attitude towards much smaller conflicts supposedly risking bad OA PR. (This is in addition to the earlier conflicts of interest in Hydrazine while running YC or the interest of outsiders in investing in Hydrazine, apparently as a stepping stone towards OA.)
* if he invested a ‘small’ amount via some vehicle before he even went full-time at OA, when OA was valued at some very small amount like $50m or $100m, say, and OA’s now valued at anywhere up to $90,000m or >900x more, and further, he strongly believes it’s going to be worth far more than that in the near-future… Sure, it may be worth ‘just’ $500m or ‘just’ $1000m after dilution or whatever, but to most people that’s pretty serious money!
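(A back-of-the-envelope version of that arithmetic; every figure below is a hypothetical placeholder rather than a reported number:)

```python
# Hypothetical round numbers, not reported figures: a 'small' early check,
# an early valuation of ~$100m, a current valuation of ~$90,000m ($90b),
# and later-round dilution cutting the original stake in half.
initial_investment = 1e6        # $1m
initial_valuation = 100e6       # $100m
current_valuation = 90_000e6    # $90b
dilution = 0.5                  # assume ~50% diluted by later rounds

stake = initial_investment / initial_valuation     # 1% of early OA
value_now = stake * current_valuation * dilution   # post-dilution value today
growth = current_valuation / initial_valuation     # 900x
print(f"{growth:.0f}x growth; stake worth ~${value_now / 1e6:.0f}m")  # ~$450m
```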
Why do you think McCauley is likely to be the board member Labenz spoke to? I had inferred that it was someone not particularly concerned about safety, given that Labenz reported them saying they could easily have requested access to the model if they’d wanted to (and hadn’t). I took the point of the anecdote to be ‘here was a board member not concerned about safety’.
Because there is not currently any evidence that Toner was going around talking to a bunch of people, whereas this says McCauley was doing so. If I have to guess “did Labenz talk to the person who was talking to a bunch of people in OA, or did he talk to the person who was as far as I know not talking to a bunch of people in OA?”, I am going to guess the former.
They weren’t the only non-employee board members though—that’s what I meant by the part about not being concerned about safety: I took it to rule out both Toner and McCauley.
(Although if for some other reason you were only looking at Toner and McCauley, then no, I would say the person going around speaking to OAI employees is _less_ likely to be out of the loop on GPT-4’s capabilities.)
The other ones are unlikely. Shivon Zilis & Reid Hoffman had left by this point; Will Hurd might or might not still have been on the board at this point, but wouldn’t be described or recommended by Labenz’s acquaintance as researching AI safety, as that does not describe Hurd or D’Angelo; Brockman, Altman, and Sutskever are right out (Sutskever researches AI safety, but Superalignment was a year away); by process of elimination, over 2023, the only board members he could plausibly have been contacting would be Toner and McCauley, and while Toner weakly made more sense before, now McCauley does.
(The description of them not having used the model unfortunately does not distinguish either one—none of the writings connected to them sound like they have all that much hands-on experience and would be eagerly prompt-engineering away at GPT-4-base the moment they got access. And I agree that this is a big mistake, but it is, even more unfortunately, an extremely common one—I remain shocked that Altman had apparently never actually used GPT-3 before he basically bet the company on it. There is a widespread attitude, even among those bullish about the economics, that GPT-3 or GPT-4 are just ‘tools’, mere ‘stochastic parrots’, with no puzzling internal dynamics or complexities. I have been criticizing this from the start, but the problem is, ‘sampling can show the presence of knowledge but not the absence’, so if you don’t think there’s anything interesting there, your prompts are a mirror which reflects only your low expectations; and the safety tuning makes it worse by hiding most of the agency & anomalies, often in ways that look like good things. For example, the rhyming poetry ought to alarm everyone who sees it, because of what it implies underneath—but it doesn’t. This is why descriptions of Sydney or GPT-4-base are helpful: they are warning shots from the shoggoth behind the friendly tool-AI ChatGPT UI mask.)
I think you might be misremembering the podcast? Nathan said that he was assured that the board as a whole was serious about safety, but I don’t remember the specific board member being recommended as someone researching AI safety (or otherwise more pro safety than the rest of the board). I went back through the transcript to check and couldn’t find any reference to what you’ve said.
“And ultimately, in the end, basically everybody said, “What you should do is go talk to somebody on the OpenAI board. Don’t blow it up. You don’t need to go outside of the chain of command, certainly not yet. Just go to the board. And there are serious people on the board, people that have been chosen to be on the board of the governing nonprofit because they really care about this stuff. They’re committed to long-term AI safety, and they will hear you out. And if you have news that they don’t know, they will take it seriously.” So I was like, “OK, can you put me in touch with a board member?” And so they did that, and I went and talked to this one board member. And this was the moment where it went from like, “whoa” to “really whoa.””
(https://80000hours.org/podcast/episodes/nathan-labenz-openai-red-team-safety/?utm_campaign=podcast__nathan-labenz&utm_source=80000+Hours+Podcast&utm_medium=podcast#excerpt-from-the-cognitive-revolution-nathans-narrative-001513)
I was not referring to the podcast (which I haven’t actually read yet, because from the intro it seems wildly out of date and from a long time ago) but to Labenz’s original Twitter thread turned into a Substack post. I think you misinterpret what he is saying in that transcript because it is loose and extemporaneous: “they’re committed” could just as easily refer to the “serious people on the board” who have “been chosen” for that (implying that there are other members of the board not chosen for that); and that is what he says in the written-down post:
This quote doesn’t say anything about the board member/s being people who are researching AI safety, though—it’s Nathan’s friends who are in AI safety research, not the board members.
I agree that based on this quote, it could have very well been just a subset of the board. But I believe Nathan’s wife works for CEA (and he’s previously MCed an EAG), and Tasha is (or was?) on the board of EVF US, and so idk, if it’s Tasha he spoke to and the “multiple people” was just her and Helen, I would have expected a rather different description of events/vibe. E.g. something like ‘I googled who was on the board and realised that two of them were EAs, so I reached out to discuss’. I mean maybe that is closer to what happened and it’s just being obfuscated, either way is confusing to me tbh.
Btw, by “out of date” do you mean relative to now, or to when the events took place? From what I can see, the tweet thread, the substack post and the podcast were all published the same day—Nov 22nd 2023. The link I provided is just 80k excerpting the original podcast.
The Superalignment compute-quota flashpoint is now confirmed. Aside from Jan Leike explicitly calling out compute-quota shortages post-coup (which strictly speaking doesn’t confirm shortages pre-coup), Fortune is now reporting that this was a serious & longstanding issue:
There seems to be very little discussion of this story on Twitter. WP’s tweet about it got only 75k views and 59 likes as of now, even though WP has 2M followers.
(I guess Twitter will hide your tweets even from your followers if the engagement rate is low enough. Not sure what the cutoff is, but 1 like to 100 views doesn’t seem uncommon for tweets, and this one is closer to 1:1,300. BTW what’s a good article to read to understand Twitter better?)
There are two things going on. First, Musk-Twitter appears to massively penalize external links. Musk has vowed to fight ‘spammers’ who post links on Twitter to other sites (gasp), the traitorous scum! Substack is only the most abhorred of these vile parasites, but all shall be brought to justice in due course. There is no need for other sites. You should be posting everything on Twitter as longform tweets (after subscribing), obviously.
You only just joined Twitter so you wouldn’t have noticed the change, but even direct followers seem to be less likely to see a tweet if you’ve put a link in it. So tweeters are increasingly reacting by putting the external link at the end of a thread in a separate quarantine tweet, not bothering with the link at all, or just leaving Twitter under the constant silent treatment that high-quality tweeting gets you these days.* So, many of the people who would be linking or discussing it are either not linking it or not discussing it, and don’t show up in the WaPo thread or by a URL search.
Second, OAers/pro-Altman tweets are practicing the Voldemort strategy: instead of linking the WaPo article at all (note that roon, Eigenrobot etc don’t show up at all in the URL search), they are tweeting screenshots or Archive.is links. This is unnecessary (aside from the external link penalty of #1) since the WaPo has one of the most porous paywalls around which will scarcely hinder any readers, but this lets them inject their spin since you have to retweet them if you want to reshare it at all, impedes reading the article yourself to see if it’s as utterly terrible and meaningless as they claim, and makes it harder to search for any discussion (what, are you going to know to search for the random archive.is snapshot...? no, of course not).
* I continue to stubbornly include all relevant external links in my tweets rather than use workarounds, and see the penalty constantly. It has definitely soured me even further on Musk-Twitter, particularly as it is contrary to the noises Musk has made about the importance of freedom of speech and higher reliability of tweets—yeah, asshole, how are you going to have highly reliable tweets or a good information ecosystem if including sources & references is almost like a self-imposed ban? And then you share ad revenue with subscribers who tweet the most inflammatory poorly-sourced stuff, great incentive design you’ve hit upon… I’m curious to see how the experience is going to degrade even further—I wouldn’t put it past Musk to make subscriptions mandatory to try to seed the ‘X everything app’ as a hail mary for the failing Twitter business model. At least that might finally be enough to canonicalize a successor everyone can coordinate a move to.
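To make the mechanics concrete, here is a toy sketch of how a multiplicative link penalty would suppress reach; every name and weight below is made up for illustration, none of it is from Twitter’s actual ranking code:

```python
# Toy model of a multiplicative external-link penalty (all names and weights
# here are hypothetical; this is not Twitter's real ranking implementation).
def timeline_score(base_engagement: float, has_external_link: bool,
                   link_penalty: float = 0.25) -> float:
    """Relative score deciding how widely a tweet is shown (illustrative only)."""
    score = base_engagement
    if has_external_link:
        score *= link_penalty  # assumed multiplicative downranking
    return score

plain = timeline_score(1.0, has_external_link=False)
linked = timeline_score(1.0, has_external_link=True)
print(f"a linked tweet reaches ~{linked / plain:.0%} of a plain tweet's audience")
```

Under an assumed 0.25x multiplier, a sourced tweet reaches a quarter of the audience an identical unsourced one does, which is exactly the incentive for the link-quarantine workarounds described above.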
Thanks for the explanations, but I’m not noticing a big “external links” penalty on my own tweets. Found some discussion of this penalty via Google, so it seems real but maybe not that “massive”? Also some of it dates to before Musk purchased Twitter. Can you point me to anything that says he increased the penalty by a lot?
Ah, Musk actually published Twitter’s algorithms, confirming the penalty. I don’t see anyone else saying that he increased the penalty, though.
BTW why do you “protect” your account (preventing non-followers from seeing your tweets)?
‘The algorithm’ is an emergent function of the entire ecosystem. I have no way of knowing what sort of downstream effects a tweak here or there would cause, or what the effects of the post-Musk changes have been. I just know what I see: my tweets’ reach appears to have plummeted since Musk took over, particularly when I link to my new essays or documents etc.
If you want to do a more rigorous analysis, I export my Twitter analytics every few months (thank goodness Musk hasn’t disabled that to try to upsell people to the subscription—maybe he doesn’t know it’s there?) and could provide you my archives. (BTW, there is a moving window where you can only get the last few months, so if you think you will ever be interested in your Twitter traffic numbers, you need to start exporting them every 2-3 months now, or else the historical data will become inaccessible. I don’t know if you can restore access to old ones by signing up as an advertiser.) EDIT: I looked at the last full pre-Musk month and my last month, and I’ve lost ~75% of views/clicks/interactions, despite trying to use Twitter in the same way.
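For anyone wanting to replicate that comparison, a minimal sketch (assuming two per-tweet analytics CSV exports; the filenames are hypothetical and the column names vary by export version, so adjust to whatever your CSVs actually contain):

```python
# Sketch: comparing two monthly per-tweet analytics exports.
import pandas as pd

pre = pd.read_csv("tweet_activity_2022-09.csv")    # last full pre-Musk month
post = pd.read_csv("tweet_activity_2023-08.csv")   # a recent month

for col in ["impressions", "engagements", "likes"]:
    decline = 1 - post[col].sum() / pre[col].sum()
    print(f"{col}: {decline:.0%} decline")         # eg. ~75% fewer views
```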
As for the ‘published’ algorithm, I semi-believe it is genuine (albeit doubtless incomplete), because Musk was embarrassed that it exposed how some parts of the new algorithm manipulate Twitter to make Musk look more popular (confirming earlier reporting that Musk had ordered such changes after getting angry that his views were dropping due to his crummy tweets); but that is also why it hasn’t been updated in almost half a year, apparently. God knows what the real thing is like by now...
Could you link to some examples of “OAers being angry on Twitter today, and using profanity & bluster and having oddly strong opinions about how it is important to refer to roon as @tszzl and never as @roon”? I don’t have a Twitter account, so I can’t search myself.