If you think there’s a 40% chance of a crash, then that’s quite the vig you’re allocating yourself on this bet at 1:7.
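To put rough numbers on that (a back-of-envelope sketch; I’m reading “1:7” as the crash side staking 1 to win 7, which is an assumption on my part):

```python
# Back-of-envelope sketch, assuming "1:7" means the crash side stakes $1
# to win $7 (my reading of the offered odds, not a quote from the OP).
p_crash = 0.40               # the stated subjective probability of a crash
p_breakeven = 1 / (1 + 7)    # implied break-even probability at 1:7 = 12.5%

# Expected profit per $1 staked on the crash side, if the 40% belief is right:
ev_per_dollar = p_crash * 7 - (1 - p_crash) * 1   # = +2.20

# Fair odds for a 40% belief would be about 0.6/0.4 = 1.5:1, not 7:1.
print(f"break-even probability at 1:7: {p_breakeven:.1%}")
print(f"edge per $1 staked, given a 40% belief: {ev_per_dollar:+.2f}")
```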
These are very poor odds, to the point that they seem to indicate a bullish rather than a bearish position on AI.
There’s definitely a better than 1 in 7 chance of a general market crash in the next year, given tariffs and recession risk (or, if you define crash loosely, we’ve already had one). Given that broader macro risk, merely 1 in 7 of an AI crash probably implies a forecast that AI will outperform the broader market.
If, for whatever reason, one is willing to disregard the macro risk, then there’s a lot more upside in just buying QQQ than taking your bet.
There’s a kind of paradox in all of these “straight line” extrapolation arguments for AI progress that your timelines assume (e.g., the argument for superhuman coding agents based on the rate of progress in the METR report).
One could extrapolate many different straight lines on graphs in the world right now (GDP, scientific progress, energy consumption, etc.). If we do create transformative AI within the next few years, then all of those straight lines will suddenly hit an inflection point. So, to believe in the straight line extrapolation of the AI line, you must also believe that almost no other straight lines will stay that way.
This seems to be the gut-level disagreement between those who feel the AGI and those who don’t; the disbelievers don’t buy that the AI line will stay straight while all the others bend.
I don’t know who’s right and who’s wrong in this debate, but the method of reasoning here reminds me of the viral tweet: “My 3-month-old son is now TWICE as big as when he was born. He’s on track to weigh 7.5 trillion pounds by age 10.” It could be true, but I have a fairly strong prior from nearly every other context that growth/progress tends to bend into an S-curve at one point or another, and so these forecasts seem deeply suspect to me unless there’s some better reason to expect that trends will continue along the same path.
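As a toy illustration of why (made-up numbers, nothing from the METR data): the same early data points are consistent with both a pure doubling trend and a logistic curve that flattens out, but the long-run forecasts diverge by many orders of magnitude.

```python
import math

# Toy illustration with made-up numbers: a newborn doubling in weight
# every 3 months, extrapolated two ways.

def exponential(t, x0=7.5):
    """Weight (lbs) after t doubling periods if growth never bends."""
    return x0 * 2 ** t

def logistic(t, x0=7.5, cap=200.0):
    """Same initial growth rate, but capped at ~200 lbs (an S-curve)."""
    r = math.log(2)  # growth rate per doubling period
    return cap / (1 + (cap / x0 - 1) * math.exp(-r * t))

# 10 years = 40 three-month doubling periods
for t in (1, 4, 40):
    print(t, round(exponential(t)), round(logistic(t)))
# At t=1 the curves are nearly indistinguishable (~15 vs ~14 lbs);
# at t=40 one forecasts ~8 trillion lbs, the other ~200 lbs.
```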
So, I certainly wouldn’t expect the AI companies to capture all the value; you’re right that competition drives the profits down. But, I also don’t think it’s reasonable to expect profits to get competed down to zero. Innovations in IT are generally pretty easy to replicate, technically speaking, but tech companies operate at remarkably high margins. Even at the moment, your various LLMs are similar but are not exact substitutes for one another, which gives each some market power.
Yea, fair enough. His prediction was: “I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code”
The second one is more hedged (“may be in a world”), but “essentially all the code” must translate to a very large fraction of all the value, even if that last 1% or whatever is of outsize economic significance.
The original statement is:
“I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code”
So, as I read it, he’s not hedging on 90% in three to six months, but he is hedging on “essentially all” (99% or whatever that means) in a year.
I don’t doubt they need capital. And the Nigerian prince who needs $5,000 to claim the $100 million inheritance does too. It’s the fact that he/they can’t get capital at anything close to the claimed value that’s suspicious.
Amodei is forecasting AI that writes 90% of code within three to six months, according to his recent comments. Is Anthropic really burning cash so fast that they can’t wait a quarter, demonstrate to investors that AI has essentially solved software, and then raise at 10x the valuation?
If AI executives really are as bullish as they say they are on progress, then why are they willing to raise money anywhere in the ballpark of current valuations?
Dario Amodei suggested the other day that AI will take over all or nearly all coding work within months. Given that software is a multi-trillion-dollar industry, how can you possibly square that statement with agreeing to raise money at a valuation for Anthropic in the mere tens of billions? And that’s setting aside any other value whatsoever for AI.
The whole thing sort of reminds me of the Nigerian prince scam (i.e., the Nigerian prince is coming into an inheritance of tens of millions of dollars but desperately needs a few thousand bucks to claim it, and will cut you in for incredible profit as a result) just scaled up a few orders of magnitude. Anthropic/OpenAI are on the cusp of technologies worth many trillions of dollars, but they’re so desperate for a couple billion bucks to get there that they’ll sell off big equity stakes at valuations that do not remotely reflect that supposedly certain future value.
It’s worth thinking through what today’s DeepSeek-induced, trillion-dollar-plus drop in AI-related stocks means.
There are two basic explanations for DeepSeek’s success training models with a lot less compute:
Imitation is Easy: DeepSeek is substantially just re-treading the same ground as the other players. They’re probably training on o1 outputs, etc. DeepSeek proves that it’s easy to match breakthroughs, but not to generate them. Further advances will still require tons of compute.
DeepSeek is really clever: Facing compute constraints, DeepSeek engineers were forced to find a better way to do the work, and they did. That cleverness will likely translate into forward progress, and there’s no reason it would be limited to imitation.
If #1 is true, then I think it implies that we’re headed towards a big slowdown in AI progress. The whole economic value proposition for building models just changed. If your frontier model can be imitated at a tiny fraction of the cost after a few months, what good is it? Why would VCs invest money in your training runs?
If #2 is true, then we may be headed towards incredibly rapid AI progress, and the odds of recursively self-improving AI are much higher. If what you really need to build better models is tons and tons of compute, then AI can’t speed itself up much. If what you need is just lots of cleverness, then it’s much easier to imagine a fast takeoff.
#1 is likely better for alignment in that it will slow things down from the current frenetic pace (the possible downside is that if you can imitate a cutting edge model cheaply and easily then hostile actors may deliberately build misaligned models).
#1 also seems to have big implications for government/legal involvement in AI. If the private sector loses interest in funding models that can be easily imitated, then further progress will tend to rely on either: government investment (as in basic science) or aggressive IP law that allows commercialization of progress by preventing imitators (as we do in drug development). Either of those means a much bigger role for the public sector.
This only works if you’re the only bookmaker in town. Even if your potential counterparties place their own subjective odds at 1:7, they won’t book action with you at 1:7 if they can get 1:5 somewhere else.
Perhaps I misread OP’s motivations, but presumably if you’re looking to make money on these kinds of forecasts, you’d just trade stocks. Sure, you can’t trade OpenAI per se, but there are a lot of closely related assets, and then you’re not stuck in the position of trying to collect on a bet you made with a stranger over the internet.
So, the function of offering such a “bet” is more as a signaling device about your beliefs. In which case, the signal being sent here is not really a bearish one.