Here’s a potential interpretation of the market’s seemingly strange reaction to DeepSeek-R1 (shorting Nvidia).
I don’t fully endorse this explanation, and the shorting may or may not have actually been due to Trump’s tariffs + insider trading, rather than DeepSeek-R1. But I see a world in which reacting this way to R1 arguably makes sense, and I don’t think it’s an altogether implausible world.
If I recall correctly, the amount of money globally spent on inference dwarfs the amount of money spent on training. Most economic entities are not AGI labs training new models, after all. So the impact of DeepSeek-R1 on the pretraining scaling laws is irrelevant: sure, it did not show that you don’t need bigger data centers to get better base models, but that’s not where most of the money was anyway.
And my understanding is that, on the inference-time scaling paradigm, there isn’t yet any proven method of transforming arbitrary quantities of compute into better performance:
Reasoning models are bottlenecked on the length of CoTs that they’ve been trained to productively make use of. They can’t fully utilize even their context windows; the RL pipelines just aren’t up to that task yet. And if that bottleneck were resolved, the context-window bottleneck would be next: my understanding is that infinite context/“long-term” memories haven’t been properly solved either, and it’s unknown how they’d interact with the RL stage (probably they’d interact okay, but maybe not).
o3 did manage to boost its ARC-AGI and (maybe?) FrontierMath performance by… generating a thousand guesses and then picking the most common one…? But who knows how that really worked, and how practically useful it is. (See e.g. this, although that paper examines a somewhat different regime.) A rough sketch of that kind of majority voting is below.
Agents, from Devin to Operator to random open-source projects, are still pretty terrible. You can’t set up an ecosystem of agents in a big data center and let them rip, such that the ecosystem’s power scales boundlessly with the data center’s size. For all but the most formulaic tasks, you still need a competent human closely babysitting everything they do, which means you’re still mostly bottlenecked on competent human attention.
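For concreteness, the “thousand guesses” trick mentioned above is essentially self-consistency-style majority voting: sample many independent reasoning rollouts for the same problem and keep the modal final answer. A minimal sketch, where `sample_answer` is a hypothetical stand-in for one full model rollout that returns only the final answer:

```python
from collections import Counter

def majority_vote(sample_answer, prompt, n_samples=1024):
    """Self-consistency-style aggregation: draw many independent
    answers for the same prompt and return the most common one.

    `sample_answer` is a hypothetical callable standing in for one
    full reasoning rollout that returns just the final answer string.
    """
    answers = [sample_answer(prompt) for _ in range(n_samples)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer
```

This is a way to spend roughly `n_samples` times more compute per task, but the returns flatten once the modal answer stabilizes, which is part of why it’s unclear it counts as a general-purpose method for converting arbitrary compute into performance.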
Suppose that you don’t expect the situation to improve: that the inference-time scaling paradigm will hit a ceiling pretty soon, or that it’ll converge to distilling search into forward passes (such that end users end up spending very little compute on inference, as they do today), and that agents just aren’t going to work out the way the AGI labs promise.
In such a world, a given task can either be completed automatically by an AI for some fixed quantity of compute X, or it cannot be completed by an AI at all. Pouring ten times more compute on it does nothing.
In such a world, if it were shown that a given task can be completed with ten times less compute than previously expected, that would decrease the expected demand for compute.
The fact that capable models can be run locally might increase the number of people willing to use them (e.g., those very concerned about data privacy), as might the ability to automatically complete 10x as many trivial tasks. But it’s not obvious that this demand spike will be bigger than the simultaneous demand drop.
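To make the sign ambiguity concrete with a toy model (all numbers made up): under the “fixed compute X per task, or not solvable at all” assumption, total compute demand is just the number of automatable tasks times compute per task, so a 10x efficiency gain reduces demand unless task volume grows by more than 10x (the Jevons-paradox break-even condition).

```python
def total_compute_demand(tasks_per_day, compute_per_task):
    """Total demand under the 'fixed compute per task, or not
    solvable at all' assumption: extra compute per task buys nothing."""
    return tasks_per_day * compute_per_task

# Made-up illustrative numbers, in arbitrary compute units.
before = total_compute_demand(tasks_per_day=1_000_000, compute_per_task=10.0)

# Compute per task drops 10x; demand for automation grows, but only 4x.
after = total_compute_demand(tasks_per_day=4_000_000, compute_per_task=1.0)

print(before, after)  # 10000000.0 4000000.0: total demand falls by 60%
# Demand only rises if task volume grows by more than the 10x efficiency gain.
```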
And I, at least, when researching ways to set up DeepSeek-R1 locally, found myself more drawn to the “wire a bunch of Macs together” option than to “wire a bunch of GPUs together” (due to the compactness). If many people are like this, it would explain why Nvidia is down while Apple is (slightly) up. (Moreover, it’s apparently possible to run the full 671B-parameter version locally, at a decent speed, using a pure RAM+CPU setup; indeed, at around $6,000, it appears cheaper than mucking about with GPUs/Macs.)
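For a rough sense of why a pure RAM+CPU build can work at all, here’s some back-of-the-envelope memory math. The parameter counts are DeepSeek’s published figures (671B total, ~37B activated per token, since it’s a MoE); the quantization level and memory bandwidth below are my assumptions, not specs of the $6,000 build:

```python
# Back-of-the-envelope math for local CPU inference on a large MoE model.
total_params = 671e9       # total parameters
active_params = 37e9       # parameters activated per token (MoE routing)
bytes_per_param = 0.5      # assuming ~4-bit quantization

weights_gb = total_params * bytes_per_param / 1e9      # must fit in RAM
per_token_gb = active_params * bytes_per_param / 1e9   # read per decoded token

# Assume ~400 GB/s of usable CPU memory bandwidth (server-class DDR5);
# decoding is roughly bandwidth-bound on the active weights.
bandwidth_gb_s = 400
tokens_per_s_ceiling = bandwidth_gb_s / per_token_gb

print(f"weights in RAM: ~{weights_gb:.0f} GB")               # ~336 GB
print(f"read per token: ~{per_token_gb:.1f} GB")             # ~18.5 GB
print(f"decode ceiling: ~{tokens_per_s_ceiling:.0f} tok/s")  # ~22 tok/s upper bound
```

So under those assumptions the whole quantized model fits in a few hundred GB of server RAM, and only the ~18 GB active slice is streamed per token, which is why a big RAM box can reach usable (if not fast) speeds without any GPUs.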
This world doesn’t seem outright implausible to me. I’m bearish on agents and somewhat skeptical of inference-time scaling. And if inference-time scaling does deliver on its promises, it’ll likely go the way of search-and-distill.
On balance, I don’t actually expect the market to have any idea what’s going on, so I don’t know that its reasoning is this specific flavor of “well-informed but skeptical”. And again, it’s possible the drop was due to Trump, nothing to do with DeepSeek at all.
But as I’d said, this reaction to DeepSeek-R1 does not seem necessarily irrational/incoherent to me.
The bet that “makes sense” is that the quality of Claude 3.6 Sonnet, GPT-4o, and DeepSeek-V3 is the best we’re going to get in the next 2-3 years, and that DeepSeek-V3 achieves it much more cheaply (fewer active parameters, smaller margins thanks to open weights), which also “suggests” that quality is compute-insensitive over a large range, so there is no benefit from more compute per token.
But if quality instead improves soon (including by training the DeepSeek-V3 architecture on GPT-4o-scale compute), and that improvement either makes it necessary to use more compute per token, or motivates spending inference compute on more tokens even with models that have the same active-parameter count (as in the Jevons paradox), that argument doesn’t work. Also, the ceiling on quality at a possible scaling-slowdown point depends on the efficiency of training (the compute multiplier) applied to the largest training system the AI economy will support (maybe 5-15 GW without almost-AGI), and DeepSeek-V3’s improved efficiency raises that ceiling.
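As a minimal illustration of that last point (numbers purely illustrative): if the physical size of the largest supportable training system is fixed by economics, the quality ceiling tracks effective compute, i.e. the algorithmic-efficiency multiplier times the physical compute that power budget buys, so a V3-style efficiency gain raises the ceiling even if the 5-15 GW figure doesn’t move.

```python
def effective_training_compute(physical_flops, compute_multiplier):
    """Effective compute = physical FLOPs x algorithmic-efficiency multiplier."""
    return physical_flops * compute_multiplier

# Illustrative only: hold the physical system fixed (whatever 5-15 GW buys,
# normalized to 1.0) and vary only the efficiency multiplier.
physical = 1.0
baseline = effective_training_compute(physical, compute_multiplier=1.0)
with_efficiency_gains = effective_training_compute(physical, compute_multiplier=4.0)  # hypothetical 4x

print(with_efficiency_gains / baseline)  # 4.0: same power budget, 4x higher ceiling
```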