Research coordinator of Stop/Pause area at AI Safety Camp.
See explainer on why AGI could not be controlled enough to stay safe:
lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Thanks, I might be underestimating the impact of new Blackwell chips and their improved compute performance.
I’m skeptical that offering “chain-of-thought” bots to more customers will make a significant difference. But I might be wrong, especially if new model architectures come out as well.
And if corporations throw enough cheap compute behind it, plus widespread personal data collection, they can get to commercially very useful model functionality. My hope is that there will be a market crash before that can happen, and that we can enable other concerned communities to restrict the development and release of dangerously unscoped models.
But even then, OpenAI might get to ~$25bn annualized revenue that won’t be going away
What is this revenue estimate assuming?
This is a neat and specific explanation of how I approached it. I tried to be transparent about it though.
If your bet is that something special about the economics of AI will cause it to crash, maybe your bet should be changed to this?
What’s relevant for me is that there is an AI market crash, such that AI corporations have weakened and we in turn have more leeway to restrict their reckless activities. Practically, I don’t mind if that’s actually the result of a wider failing economy – I mentioned a US recession as a causal factor here.
Having said that, it would be easier to restrict AI corp activities when there is not a general market crash at the same time (since the latter would make it harder to fund organisers, and harder for working citizens to mobilise).
PS: I don’t exactly have $25k to bet, and I’ve said elsewhere I do believe there’s a big chance that AI spending will decrease.
Understood! And I appreciate you discussing thoughts with me here.
Another thought is that changes in the amount of investment may swing further than changes in the value...?
Interesting point! That feels right, but I lack experience/clarity about how investments work here.
That’s a good distinction.
I want to take you up on measuring actual inflows of capital into the large-AI-model development companies. Rather than e.g. measuring the prices of stocks in companies leading on development – where declines may not much reflect an actual reduction in investment and spending on AI products.
Consumers and enterprises cutting back on their subscriptions and private investors cutting back on their investment offers and/or cancelling previous offers – those seem reliable indicators of an actual crash.
It’s plausible that a general market crash feeds into, and is reflective of, worsening economics of the AI companies. So it seems hard to decouple causation there. And I’d still call it an AI market crash even if investments and valuations are going down to a similar extent in other industries. So I would not try to control for other market declines happening around the same time, but your suggested indicators make sense!
For sure! Proceeds go to organisers who can act to legitimately restrict the weakened AI companies.
(Note that with a crash I don’t just mean some large reduction in the stock prices of tech companies that have been ‘leading’ on AI. I mean a broad-based reduction in the investments and/or customer spending going into the AI companies.)
Maybe I’m banking too much on some people in the AI Safety community continuing to think that AI “progress” will follow a rapid upward curve :)
Elsewhere I posted a guess of a 40% chance of an AI market crash this year, though I did not have precise crash criteria in mind there, and would lower the percentage once it’s judged by a few measures rather than by my sense of “that looks like a crash”.
Thanks, I hadn’t seen that graph yet! I had only searched Manifold.
The odds of 1:7 imply a 12.5% chance of a crash. That’s far outside the consensus on that graph. Though I also notice that their criteria for a “bust or winter” are much stricter than where I’d set the threshold for a crash.
That makes me wonder whether I should have selected a lower odds ratio (for a higher return on the upside). Regardless, this month I’m prepared to take this bet.
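For anyone who wants to check the conversion, here is a minimal sketch (plain odds-to-probability arithmetic, nothing specific to this bet):

```python
def implied_probability(odds_for: float, odds_against: float) -> float:
    """Convert betting odds of the form for:against into an implied probability."""
    return odds_for / (odds_for + odds_against)

# Odds of 1:7 on a crash imply a 1 / (1 + 7) = 12.5% chance:
print(implied_probability(1, 7))  # 0.125
```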
but calling this “near free money” when you have to put up 25k to get it...
Fair enough – you’d have to set aside this amount in your savings. You could still earn some interest from the bank, but that’s not much.
I think the US is in a recession now, and that the AI market has a ~40% chance of crashing with it this year.
This is a solid point that I forgot to take into account here.
What happens to GPU clusters inside the data centers built out before the market crash?
If user demand slips and/or various companies stop training, that means compute prices will slump. As a result, cheap compute will be available for remaining R&D teams, for at least the three or so years that the GPUs last.
I find that concerning. Because not only is compute cheap, but many of the researchers left using that compute will have reached an understanding that scaling transformer architectures on internet-available data has become a dead end. With investor and managerial pressure to release LLM-based products gone, researchers will explore their own curiosities. This is the time you’d expect the persistent researchers to invent and tinker with new architectures, ones that could end up being more compute- and data-efficient at encoding functionality.
~ ~ ~
I don’t want to skip over your main point. Is your argument that AI companies will be protected from a crash, since their core infrastructure is already built?
Or more precisely:
that since data centers were built out before the crash, compute prices end up converging on mostly just the cost of the energy and operations needed to run the GPU clusters inside,
which in turn acts as a financial cushion for companies like OpenAI and Anthropic, for whom inference costs are now lower,
where those companies can quickly scale back expensive training and R&D, while offering their existing products to remaining users at lower cost.
as a result of which, those companies can continue to operate during the period that funding has dried up, waiting out the ‘AI winter’ until investors and consumers are willing to commit their money again.
That sounds right, given that compute accounts for over half of their costs. Particularly if the companies secure another large VC round ahead of a crash, they should be able to weather the storm. E.g. the $40 billion just committed to OpenAI (assuming that by the end of this year OpenAI exploits a legal loophole to become for-profit, that their main backer SoftBank can lend enough money, etc).
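To make that cushion concrete, here is a toy sketch of the argument. Both numbers are made up for illustration; the only thing supported above is that compute is over half of costs:

```python
# Toy illustration of the 'compute-price cushion' argument.
# Assumed, not sourced: compute is ~55% of total costs, and a post-crash
# glut cuts compute prices by 60%.
compute_share = 0.55  # fraction of total costs that is compute (assumption)
price_drop = 0.60     # post-crash drop in compute prices (assumption)

new_total_cost = (1 - compute_share) + compute_share * (1 - price_drop)
print(f"total costs fall to {new_total_cost:.0%} of pre-crash levels")  # -> 67%
```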
Just realised that your point seems similar to Sequoia Capital’s:
“declining prices for GPU computing is actually good for long-term innovation and good for startups. If my forecast comes to bear, it will cause harm primarily to investors. Founders and company builders will continue to build in AI—and they will be more likely to succeed, because they will benefit both from lower costs and from learnings accrued during this period of experimentation.”
~ ~ ~
A market crash is by itself not enough to deter these companies from continuing to integrate increasingly automated systems into society.
I think a coordinated movement is needed; one that exerts legitimate pressure on our failing institutions. The next post will be about that.
Glad to read your thoughts!
Agreed on being friends with communities who are not happy about AI.
I’m personally not a fan of working with OpenAI or Anthropic, given that they’ve defected on people here concerned about a default trajectory to mass extinction, and used our research for their own ends.
Yes, I get that you don’t just want to read about the problem, but also about a potential solution.
The next post in this sequence will summarise the plan by those experienced organisers.
These organisers led one of the largest grassroots movements in recent history. That took years of coalition building, and so will building a new movement.
So they want to communicate the plan clearly, without inviting misinterpretations down the line. I myself rushed writing on new plans before (when I added nuance to a press release put out by a time-pressed colleague at Stop AI). That backfired because I hadn’t addressed obvious concerns. This time, I drafted a summary that the organisers liked but still want to refine. So they will run sessions with me and a facilitator, to map out stakeholders and their perspectives, before going public on plans.
Check back here in a month. We should have a summary ready by then.
Thanks for your takes! Some thoughts on your points:
Yes, OpenAI has useful infrastructure and brands. It’s hard to imagine a scenario where they wouldn’t just downsize and/or be acquired by e.g. Microsoft.
If OpenAI or Anthropic goes down like that, I’d be surprised if some other AI companies don’t go down with them. This is an industry that very much relies on stories convincing people to buy into the promise of future returns, given that most companies are losing money on developing and releasing large models. When those stories fail to play out with an industry leader, the common awareness of that failure will cascade into people dropping their commitments throughout the industry.
AI companies may fail in part because people stop using their products. For example, if a US recession happens, paid users may switch to cheaper alternatives like DeepSeek’s, or stop using the tools altogether. Also, ChatGPT started as a flashy product that relied on novelty and future promises to get people excited to use it. After a while, people get bored of a product that isn’t changing much anymore, and is not actually delivering on OpenAI’s proclamations of how AI will rapidly improve.
Sure, companies fund interesting research. At the same time, do you know other examples of $600 billion+ being invested yearly into interesting research without expectations of much profit?
Other communities I’m in touch with are already outraged about the AI thing. This includes creative professionals, tech privacy advocates, families targeted by deepfakes, tech-aware environmentalists, some Christians, and so forth. More broadly, there has been growing public frustration about tech oligarchs extracting wealth while taking over the government, about a ‘rot economy’ that pushes failing products, about algorithmic intermediaries creating a sense of disconnection, and about a lack of stable dignified jobs. ‘AI’ is at the intersection of all of those problems, and has therefore become a salient symbol for communities to target. An AI market crash, alongside other correlated events, can bring their frustrations to the surface and magnify them.
Those are my takes. Curious if this raises new thoughts.
Yes, the huge ramp-up in investment by companies into deep learning infrastructure & products (since 2012), at billion-dollar losses, also reminds me of the dot-com bubble. The exception is that now it’s not only small investment firms and individual investors providing the money: big tech conglomerates are also diverting profits from their cash-cow businesses.
I can’t speak with confidence about whether OpenAI is more like Amazon or other larger internet startups that failed. Right now though, OpenAI does not seem to have much of a moat.
Yes, good point. There is a discussion of that here.
Glad you spotted that! Those two quoted claims do contradict each other, as stated. I’m surprised I had not noticed that.
but I’m not sure where that money goes.
The Information had a useful table on OpenAI’s projected 2024 costs. Linking to a screenshot here.
But I’m not sure why the article says that “every single paying customer” only increases the company’s burn rate given that they spend less money running the models than they get in revenue.
I’m not sure either why Ed Zitron wrote that. When I’m back on my laptop, I’ll look at older articles for any further reasoning.
Looking at the cost items in The Information’s table, revenue share with Microsoft ($700 million) and hosting ($400 million) definitely seem mostly variable with subscriptions. It’s harder to say for the sales & marketing ($300 million) and general administrative costs ($600 million).
Given that information, the revenue that OpenAI earns for itself would still be higher than just the cost of running the models and hosting (which we could call the “cost of running software”).
It’s hard to say, though, how much overall cost is added on the margin per additional normal-tier user. Partly, it depends on how much more they use OpenAI’s tools than free users do. But I’d guess you’re more likely right than not that (if we exclude past training and research compute costs, and other fixed costs) the overall revenue per normal-tier user added would be higher than the accompanying costs.
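As a rough way to organise that back-of-envelope reasoning, here is a sketch. The four cost figures come from The Information’s table quoted above; `revenue` and `inference_compute` are placeholders to fill in from that table, not figures I’m asserting, and the variable-vs-fixed split is my own guess:

```python
# Back-of-envelope check: does an added paying user raise or lower the burn rate?
# All figures in $bn (2024 projections, per The Information's table where noted).

revenue = 0.0            # placeholder: annualized subscription revenue (fill in)
inference_compute = 0.0  # placeholder: cost of running the models (fill in)

ms_revenue_share = 0.7   # revenue share with Microsoft (mostly variable)
hosting = 0.4            # hosting (mostly variable)
sales_marketing = 0.3    # unclear how variable with user count
general_admin = 0.6      # unclear how variable with user count

# Optimistic case: only the clearly variable items scale with paying users.
margin_high = revenue - (inference_compute + ms_revenue_share + hosting)
# Pessimistic case: sales & marketing and G&A scale with users too.
margin_low = margin_high - (sales_marketing + general_admin)

print(f"contribution margin between {margin_low:.1f} and {margin_high:.1f} $bn")
```

If `margin_high` comes out positive once the placeholders are filled in, each added paying user reduces rather than increases the burn rate, which is the crux of the disagreement with the article.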
Now the article claims that OpenAI spent $9 billion in total
Note also that the $9 billion total cost amount seems understated in three ways:
the amortised research compute amount lags behind the recent higher compute costs for research.
the data costs (which I assume are not variable with user count) do not appear to price in possible compensation OpenAI will be ordered to pay for any past violations identified in ongoing lawsuits.
from an investor perspective, the $1.5 billion(?) worth of profit shares handed out is also a ‘cost’.
Thanks, I’ve got to say I’m a total amateur when it comes to GPU performance. So I’ll take the time to read your linked comment to understand it better.