Alright, so I’ve been following the latest OpenAI Twitter freakout, and here’s some urgent information about the latest closed-doors developments that I’ve managed to piece together:
Following OpenAI Twitter freakouts is a colossal, utterly pointless waste of your time and you shouldn’t do it ever.
If you saw this comment of Gwern’s going around and were incredibly alarmed, you should probably undo the associated update regarding AI timelines (at least partially, see below).
OpenAI may be running some galaxy-brained psyops nowadays.
Here’s the sequence of events, as far as I can tell:
Some Twitter accounts that are (claiming, without proof, to be?) associated with OpenAI are being very hype about some internal OpenAI developments.
Gwern posts this comment suggesting an explanation for point 1.
Several accounts (e.g., one, two) claiming (without proof) to be OpenAI insiders start to imply that:
An AI model recently finished training.
Its capabilities surprised and scared OpenAI researchers.
It produced some innovation/is related to OpenAI’s “Level 4: Innovators” stage of AGI development.
Gwern’s comment goes viral on Twitter (example).
A news story about GPT-4b micro comes out, indeed confirming a novel OpenAI-produced innovation in biotech. (But it is not actually an “innovator AI”.)
The stories told by the accounts above start to mention that the new breakthrough is similar to GPT-4b: that it’s some AI model that produced an innovation in “health and longevity”. But also, that it’s broader than GPT-4b, and that the full breadth of this new model’s surprising emergent capabilities is unclear. (One, two, three.)
Noam Brown, an actual confirmed OpenAI researcher, complains about “vague AI hype on social media”, and states they haven’t yet actually achieved superintelligence.
The Axios story comes out, implying that OpenAI has developed “PhD-level superagents” and that Sam Altman is going to brief Trump on them. Of note:
Axios is partnered with OpenAI.
If you put on the Bounded Distrust lens, you can see that the “PhD-level superagents” claim is entirely divorced from any actual statements made by OpenAI people. The article ties in a Mark Zuckerberg quote instead, etc. Overall, the article weaves the impression it wants to create out of vibes (with which it’s free to lie), not concrete factual statements.
The “OpenAI insiders” gradually ramp up the intensity of their story all the while, suggesting that the new breakthrough would allow ASI in “weeks, not years”, and also that OpenAI won’t release this “o4-alpha” until 2026 because they have a years-long Master Plan, et cetera. Example, example.
Sam Altman complains about “twitter hype” being “out of control again”.
OpenAI hype accounts deflate.
What the hell was all that?
First, let’s dispel any notion that the hype accounts are actual OpenAI insiders who know what they are talking about:
“Satoshi” claims to be blackmailing OpenAI higher-ups in order to be allowed to shitpost classified information on Twitter. I am a bit skeptical of this claim, to put it mildly.
“Riley Coyote” has a different backstory which is about as convincing by itself, and which also suggests that “Satoshi” is “Riley”’s actual source.
As far as I can tell from digging into the timeline, both accounts just started acting as if they were OpenAI associates posting leaks. Not even, like, saying that they’re OpenAI associates posting leaks, much less proving it. Just acting as if they’re OpenAI associates and everyone knows this. Their tweets then went viral. (There’s also the strawberry guy, who likewise implies he’s an OpenAI insider, who also joined in on the above hype-posting, and who seems to have been playing this same game for a year now. But I’m tired of looking up the links, and the contents are intensely unpleasant. Go dig through that account yourself if you want.)
In addition, none of the OpenAI employee accounts with real names that I’ve been able to find have been participating in this hype cycle. So if OpenAI allowed its employees to talk about what happened/is happening, why weren’t any confirmed-identity accounts talking about it (except Noam’s, deflating it)? Why only the anonymous Twitter people?
Well, because this isn’t real.
That said, the timing is a bit suspect. This hype starting up, followed by the GPT-4b micro release and the Axios piece, all in the span of ~3 days? And the hype men’s claims at least partially predicting the GPT-4b micro thing?
There are three possibilities:
A coincidence. (The predictions weren’t very precise, just “innovators are coming”. The details about health-and-longevity and the innovative output got added after the GPT-4b piece, as far as I can tell.)
A leak in one of the newspapers working on the GPT-4b story (which the grifters then built a false narrative around).
Coordinated action by OpenAI.
One notable point: the Axios story was surely coordinated with OpenAI, and it’s both full of shenanigans and references the Twitter hype (“several OpenAI staff have been telling friends they are both jazzed and spooked by recent progress”). So OpenAI was doing shenanigans. So I’m slightly inclined to believe it was all an OpenAI-orchestrated psyop.
Let’s examine this possibility.
Regarding the truth value of the claims: I think nothing has happened, even if the people involved are OpenAI-affiliated (in a different sense from how they claim). Maybe there was some slight unexpected breakthrough on an obscure research direction, at most, to lend an air of technical truth to those claims. But I think it’s all smoke and mirrors.
However, the psyop itself (if it were one) has been mildly effective. I think tons of people actually ended up believing that something might be happening (e.g., janus, the AI Notkilleveryoneism Memes guy, myself for a bit, maybe gwern, if his comment referenced the pattern of posting related to the early stages of this same event).
That said, as Eliezer points out here, it’s advantageous for OpenAI to be crying wolf: both to drive up/maintain hype among their allies, and to frog-boil the skeptics into instinctively dismissing any alarming claims. Such that, say, if there ever are actual whistleblowers pseudonymously freaking out about unexpected breakthroughs on Twitter, nobody believes them.
That said, I can’t help but think that if OpenAI were actually secure in their position and making insane progress, they would not have needed to do any of this stuff. If you’re closing your fingers around agents capable of displacing the workforce en masse, if you see a straight shot to AGI, why engage in this childishness? (Again, if Satoshi and Riley aren’t just random trolls.)
Bottom line, one of the following seems to be the case:
There’s a new type of guy, which is to AI/OpenAI what shitcoin-shills are to cryptocurrency.
OpenAI is engaging in galaxy-brained media psyops.
Oh, and what’s definitely true is that paying attention to what’s going viral on Twitter is a severe mistake. I’ve committed it for the first and last time.
I also suggest that you unroll the update you might’ve made based on Gwern’s comment. Not the part describing the o-series’ potential – that’s of course plausible and compelling. The part where that potential seems to have already been confirmed and realized according to ostensible OpenAI leaks – because those leaks seem to be fake. (Unless Gwern was talking about some other demographic of OpenAI accounts being euphorically optimistic on Twitter, which I’ve somehow missed?)[1]
(Oh, as to Sam Altman meeting with Trump? Well, that’s probably because Trump’s Sinister Vizier, Sam Altman’s sworn nemesis, Elon Musk, is whispering in Trump’s ear 24/7, urging him to crush OpenAI, and if Altman doesn’t seduce Trump ASAP, Trump will do just that. Especially since OpenAI is currently vulnerable due to their legally dubious for-profit transition.
This planet is a clown show.)
I’m currently interested in:
Arguments for actually taking the AI hype people’s claims seriously. (In particular, were any actual OpenAI employees provably involved, and did I somehow miss them?)
Arguments regarding whether this was an OpenAI psyop vs. some random trolls.
Also, pinging @Zvi in case any of those events showed up on his radar and he plans to cover them in his newsletter.
Also, I can’t help but note that the people passing the comment around (such as this, this) are distorting it. The Gwern-stated claim isn’t that OpenAI are close to superintelligence, it’s that they may feel as if they’re close to superintelligence. Pretty big difference!
Though, again, even that is predicated on actual OpenAI employees posting actual insider information about actual internal developments. Which I am not convinced is a thing that is actually happening.
I personally put a relatively high probability on this being a galaxy-brained media psyop by OpenAI/Sam Altman.
Eliezer makes a very good point that confusion around people claiming AI advances/whistleblowing benefits OpenAI significantly, and Sam Altman has a history of making galaxy-brained political plays (attempting to get Helen fired (and then winning), testifying to Congress that it’s good that he has oversight via the board and that he should not be in full control of OpenAI, and then replacing the board with underlings, etc.).
Sam is very smart and politically capable. This feels in character.
It all started from Sam’s six-word story. So it looks like organized hype.
Thanks for doing this so I didn’t have to! Hell is other people—on social media. And it’s an immense time-sink.
Zvi is the man for saving the rest of us vast amounts of time and sanity.
I’d guess the psyop spun out of control with a couple of opportunistic posters pretending they had inside information, and that’s why Sam had to say to lower your expectations 100x. I’m sure he wants hype, but he doesn’t want high expectations that are very quickly falsified. That would lead to some very negative stories about OpenAI’s prospects; even if those were equally silly, they’d harm investment hype.
There’s a possibility that this was a clown attack on OpenAI instead...
Thanks for the sleuthing.
The thing is—last time I heard about OpenAI rumors, it was Strawberry.
The unfortunate fact of life is that, too many times, OpenAI’s shipping has surpassed all but the wildest speculations.
That was part of my reasoning as well, why I thought it might be worth engaging with!
But I don’t think this is the same case. Strawberry/Q* was being leaked about by more reputable sources, and it was concurrent with dramatic events (the coup) that were definitely happening.
In this case, all evidence we have is these 2-3 accounts shitposting.
Thanks.
Well, 2-3 shitposters and one gwern.
Who would be so foolish to short gwern? Gwern the farsighted, gwern the prophet, gwern for whom entropy is nought, gwern augurious augustus
I feel like for the same reasons, this shortform is kind of an engaging waste of my time. One reason I read LessWrong is to avoid twitter garbage.
Valid; I was split on whether it was worth posting vs. whether it’d just be me playing my part in spreading this nonsense. But it seemed to me that a lot of people, including LW regulars, might’ve been fooled, so I erred on the side of posting.
I don’t think any of that invalidates that Gwern is, as usual, usually right.
As I’d said, I think he’s right about the o-series’ theoretical potential. I don’t think there is, as of yet, any actual indication that this potential has already been harnessed, and therefore that it works as well as the theory predicts. (And of course, the o-series scaling quickly at math is probably not even an omnicide threat. There’s an argument for why it might be – that the performance boost will transfer to arbitrary domains – but that doesn’t seem to be happening. I guess we’ll see once o3 is public.)
I think superhuman AI is inherently very easy. I can’t comment on the reliability of those accounts, but the technical claims seem plausible.