But don’t the additional GPU requirements apply equally to training and inference? If that’s the case, then the number of inference instances that can be run on training hardware (post-training) will still be on the order of 1e6.
Not for transformers, for which training and inference are fundamentally different.
Transformer training parallelizes over time, but that isn’t feasible for inference. So transformer inference backends have to parallelize over batch/space, just like RNNs, which is enormously less efficient in RAM and RAM bandwidth use.
So if you have a large attention model that uses, say, 1TB of KV cache (fast weights) and 1TB of slow weights, transformer training can often run at near full efficiency, flop-limited, parallelizing over time.
But similarly efficient, flop-limited transformer inference would require running about K instances/agents in parallel, where K is the flop/mem_bw ratio (currently up to ~1000 on an H100). So that would be 1000 * 1TB of RAM for the KV cache (fast weights), as it is unique per agent instance.
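A rough back-of-envelope sketch of that arithmetic in Python, using the illustrative 1TB/1TB split above and an assumed H100-class flop:bandwidth ratio of ~1000; the numbers are placeholders, not measurements:

```python
# Sketch of the batch size needed for flop-limited generation, using the
# illustrative numbers above. All figures are assumptions, not measured values.

alu_ratio = 1000          # flops per byte of memory bandwidth (roughly H100-class)
weight_ram_tb = 1.0       # "slow weights", shared across instances
kv_cache_ram_tb = 1.0     # "fast weights" (KV cache), unique per agent instance

# To keep the ALUs busy during generation, weight reads must be amortized over
# roughly alu_ratio concurrent instances: each token step reads all weights once
# (shared across the batch) plus each instance's private KV cache.
instances_for_full_utilization = alu_ratio

total_kv_ram_tb = instances_for_full_utilization * kv_cache_ram_tb
print(f"KV cache RAM for {instances_for_full_utilization} instances: "
      f"{total_kv_ram_tb:.0f} TB (plus {weight_ram_tb:.0f} TB of weights)")
# -> 1000 TB of KV cache RAM, the constraint described above.
```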
This, in a nutshell, is part of why we don’t already have AGI. Transformers are super efficient at absorbing book knowledge, but just as inefficient as RNNs at inference (generating new experiences, which is a key bottleneck on learning from experience).
Thus there is of course much research into more efficient long KV caches, tree/graph inference that can share some of the KV cache across similar branching agents, etc.
In practice, throughput for generating tokens is only perhaps 3-10x worse than reading (input/prompt) tokens. This is true even while optimizing for latency on generation (rather than throughput).
(This is for well optimized workloads: additional inference optimizations are needed for generation.)
For instance, see the pricing on various APIs. (OpenAI charges input 3x cheaper than output; Anthropic charges input 5x cheaper than output.)
I’m skeptical this will change significantly with future, larger models.
Input vs output tokens are both unique per agent history (prompt + output), so that differentiation doesn’t matter for my core argument about the RAM constraint. If you have a model which needs 1TB of KV cache, and you aren’t magically sharing that significantly between instances, then you’ll need at least 1000 * 1TB of RAM to run 1000 inferences in parallel.
The 3x to 10x cost ratio model providers charge is an economic observation that tells us something about current cost vs utility tradeoffs, but it is complicated by the oversimplification of current pricing models (providers are not charging their true costs, probably because that would be too complicated, but also perhaps because it would reveal too much information—their true cost would look more like charging rent on RAM for every timestep). It tells you, very roughly, that the mean flop utilization (averaged over many customer requests) of the generation phase (parallel over instances) is perhaps 3x to 10x lower than that of the prefill phase (parallel over time), but it doesn’t directly tell you why.
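As a toy illustration of what "rent on RAM for every timestep" would look like as a cost model, here is a minimal sketch; every price and size in it (kv_bytes_per_token, ram_price_per_byte_second, seconds_per_output_token) is a made-up placeholder chosen only to show the shape of the cost function, not anyone's actual costs:

```python
# Toy "rent on RAM per timestep" cost model, as opposed to flat per-token pricing.
# All prices and sizes are hypothetical placeholders for illustration only.

def ram_rent_cost(prompt_tokens: int, output_tokens: int,
                  kv_bytes_per_token: float = 1e6,           # hypothetical KV bytes per token
                  ram_price_per_byte_second: float = 1e-13,  # hypothetical $ rent per byte-second
                  seconds_per_output_token: float = 0.05) -> float:
    """Cost of holding the (growing) KV cache resident while generating."""
    cost = 0.0
    context = prompt_tokens
    for _ in range(output_tokens):
        # Each generated token pays rent on the KV cache for the whole context so far.
        cost += (context * kv_bytes_per_token
                 * ram_price_per_byte_second * seconds_per_output_token)
        context += 1
    return cost

# Short prompts are cheap; long prompts pay rent on every generation step.
print(ram_rent_cost(prompt_tokens=1_000, output_tokens=1_000))
print(ram_rent_cost(prompt_tokens=128_000, output_tokens=1_000))
```

The point is only that under such a model the cost of an output token depends on how much per-instance state it has to keep resident, which flat per-token pricing hides.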
This is all downstream of model design and economics. There are many useful requests that LLMs can fulfill using barely any KV cache—essentially all Google/oracle-type use cases where you are just asking the distilled wisdom of the internet a question. If those were all of the request volume, then the KV cache RAM per instance would be inconsequential, inference batch sizes would be > 1000, inference flop utilization would be the same for prefill vs generation, and providers would charge the same price for input vs output tokens.
At the other extreme, if all requests used up the full training context window, then the flop utilization of generation would be constrained to approximately ((max_KV_cache_RAM + weight_RAM) / max_KV_cache_RAM) / alu_ratio. For example, if the KV cache is 10% of RAM and the alu_ratio is 1000:1, generation would have a max efficiency of 1%. If prefill efficiency were 30%, then output tokens would presumably be priced 30x higher than input tokens.
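A worked version of that bound, with the same illustrative numbers (KV cache at 10% of RAM, alu_ratio of 1000:1, prefill at 30% utilization):

```python
# Worked example of the generation-efficiency bound sketched above:
# efficiency ~= ((kv_cache_ram + weight_ram) / kv_cache_ram) / alu_ratio

def max_generation_efficiency(kv_cache_ram: float, weight_ram: float,
                              alu_ratio: float) -> float:
    return ((kv_cache_ram + weight_ram) / kv_cache_ram) / alu_ratio

# KV cache is 10% of RAM, weights are the other 90%, alu_ratio is 1000:1.
eff = max_generation_efficiency(kv_cache_ram=0.1, weight_ram=0.9, alu_ratio=1000)
print(f"max generation efficiency: {eff:.1%}")   # -> 1.0%

# If prefill runs at ~30% flop utilization, the implied output:input
# price ratio would be roughly 30x.
prefill_efficiency = 0.30
print(f"implied output:input price ratio: {prefill_efficiency / eff:.0f}x")
```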
So the observed input:output token pricing depends on the combination of the KV cache RAM fraction (largely a model design decision), the current efficiency of prefill vs generation implementations, and most importantly the distribution of request prompt lengths, which itself depends on the current economic utility of shorter vs longer prompts for current models.
In practice most current models have a much smaller KV-cache-to-weight RAM ratio than my simple 1:1 example, but the basic point holds: training is more flop and interconnect limited, while inference is more RAM and RAM bandwidth limited. These constraints already shape the design space of models and how they are deployed.
LLMs currently excel at anything a human knowledge worker can do without any specific training (minimal input prompt length), but largely aren’t yet competitive with human experts at most real-world economic tasks that require significant unique per-job training. Coding is a good example—human thoughtspeed is roughly 9 tokens/s, or ~32K/hour, or ~256K per 8-hour work day, or roughly 1M tokens per work week.
Current GPT-4 Turbo (one of the current leaders for coding), for example, has a max context length of 128K (roughly 4 hours of human thought). If you actually use all of that for each request in a typical coding workflow that generates, say, 1K of useful output (equivalent to a few minutes of human thought), it will cost you about $1.25 for the input tokens but only about $0.03 for the output tokens. That is about as expensive as a human worker, per minute of output thought tokens. The cost of any LLM agent today (per minute of output thought) increases linearly with input prompt length—i.e. with the agent’s unique, differentiating short-term memory. Absent more sophisticated algorithms, the cost of running a ReAct-like LLM agent thus grows quadratically with time, vs linearly for humans (because each small observe-act time step has cost proportional to the input context length, which grows with every time step).
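A minimal sketch of that quadratic growth, using the per-token prices implied by the figures above (~$0.01/1K input, $0.03/1K output) and an assumed 1K tokens per observe-act step; the step size is a hypothetical chosen for illustration:

```python
# Why a ReAct-style agent's cumulative cost grows roughly quadratically with
# time: each observe-act step re-pays for the entire context so far.

input_price_per_token = 0.01 / 1000    # $ per input token (implied by the text)
output_price_per_token = 0.03 / 1000   # $ per output token (implied by the text)
tokens_per_step = 1_000                # hypothetical observe+act tokens per step

def agent_cost(num_steps: int) -> float:
    cost, context = 0.0, 0
    for _ in range(num_steps):
        cost += context * input_price_per_token           # re-read the whole history
        cost += tokens_per_step * output_price_per_token  # new tokens this step
        context += tokens_per_step
    return cost

for steps in (10, 100, 1000):
    print(f"{steps:5d} steps: ${agent_cost(steps):,.2f}")
# Cost per step grows linearly with context, so total cost grows ~quadratically,
# whereas a human's cost per unit of thought stays roughly constant.
```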
Human programmers aren’t being replaced en masse (yet) in part because current models aren’t especially smarter than humans at equivalent levels of job-specific knowledge/training.
Normalized for similar ability, LLMs currently are cheaper than humans at most any knowledge work that requires very little job-specific knowledge/training, and much more expensive than humans for tasks that require extensive job-specific knowledge/training—and this has everything to do with how transformers currently consume and utilize VRAM.
These are good points.