Intuitively, it seems that output tokens should be more expensive. The autoregressive model has to run once for each output token, and as these runs progress, output tokens gradually become part of the input (so the last token is generated with the context being the entire input plus almost all of the output).
I agree with the intuition, but I think that’s where I am confused. Thanks to the KV cache, we do not run the whole new sequence (previous sequence + last generated token) through the Transformer layers the way we run the input sequence during prefill. It’s all cached (from prefill + from the generation of each earlier token of that sequence). So… I don’t know; it doesn’t feel like the output tokens are more expensive in this case (you run “once”, the same way as you run “once” for every input token)?
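To make the setup concrete, here is a toy single-head sketch of what the cache does and does not save (all names and shapes are illustrative, not any real library’s API): during decode, only the new token’s q/k/v are computed and one row is appended to the cache, yet the new query is still compared against every cached key.

```python
import numpy as np

d = 64
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def prefill(X):
    """Run once over the whole prompt: cache K/V for every input token."""
    return X @ Wk, X @ Wv

def decode_step(x_new, K_cache, V_cache):
    """One generation step: project ONLY the new token, reuse the cache."""
    q, k, v = x_new @ Wq, x_new @ Wk, x_new @ Wv
    K = np.vstack([K_cache, k])        # the cache grows by a single row
    V = np.vstack([V_cache, v])
    scores = K @ q / np.sqrt(d)        # still one dot product per cached token
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V, K, V

X = rng.normal(size=(1000, d))         # a 1000-token prompt
K, V = prefill(X)
out, K, V = decode_step(rng.normal(size=d), K, V)
print(K.shape)                         # (1001, 64): nothing is recomputed, but the
                                       # step still touched all 1001 cached keys
```

So the cache removes recomputation, but each step still pays for attention over the whole existing context.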
I think they do amortize their costs among all uses. The number of runs (the number of output tokens) multiplied by the (varying) cost of each run is unlikely to be close to linear.
Do you mind saying more about this? I am not sure what you mean. I.e., do some pay more and some pay less (e.g., heavy hitters pay less, while small prompters pay comparatively more per token)?
it doesn’t feel like the output tokens are more expensive in this case (you run “once”, the same way as you run “once” for every input token)?
One has to run the whole Transformer once for an output token. So if we ignore the difference between runs, the complexity would be the number of output tokens multiplied by the cost of a single run.
Now, what is the cost of the run, and how does it vary?
The context for that run is sometimes all of the input, sometimes all of the input plus almost all of the output, and sometimes something in between. If we disregard the fact that output tokens are only included in the contexts of some of the runs, then input and output tokens should contribute similarly to the cost of a typical run. In reality, output tokens contribute less: an early output token participates in almost all of the runs, a late output token in almost none of them, so a typical output token participates in roughly half of the runs and contributes about half of what an input token contributes to the cost of a single run. I am not sure how things like KV caching affect that.
(I would assume that a clever caching scheme would eliminate or reduce the difference between input and output tokens in terms of their contribution to the cost of a single run.)
But the number of runs is exactly the number of output tokens, so the overall cost seems to grow much faster when the number of output tokens grows.
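Totaling that toy per-run cost over all runs makes the asymmetry visible (decode_cost is a made-up illustrative function, under the same single-head assumptions as the sketch above):

```python
def decode_cost(n_in, n_out):
    """Toy total attention cost: decode step t attends over the n_in input
    tokens plus the t output tokens generated so far."""
    return sum(n_in + t for t in range(n_out))

base = decode_cost(1000, 1000)
print(decode_cost(2000, 1000) / base)   # ~1.67x: doubling inputs enters linearly
print(decode_cost(1000, 2000) / base)   # ~2.67x: doubling outputs adds a quadratic term
```

The closed form is n_out * n_in + n_out * (n_out - 1) / 2: the input term is linear, the output term quadratic, which matches the “typical output token counts for about half” observation above.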
Do you mind saying more about this? I am not sure what you mean. I.e., do some pay more and some pay less (e.g., heavy hitters pay less, while small prompters pay comparatively more per token)?
No, the typical pricing tends to be the same (unless one can score a discounted plan).
But some queries might bring more profit, some might bring less, and some might even be served at a loss, depending on the query. The provider is OK with a non-uniform margin across queries, and with that margin depending on the number of input tokens, the number of output tokens, and so on. What they care about is average payments vs. average costs.
(Similarly, with a flat monthly fee in interfaces like ChatGPT, they make more money off light users, and might even be OK with taking a loss on the heaviest users, as long as there are not too many of them.)
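As a purely hypothetical illustration of that averages argument (every number below is invented):

```python
fee = 20.00                             # hypothetical flat monthly fee
light_cost, heavy_cost = 2.00, 80.00    # assumed monthly serving costs per user type
light_share = 0.90                      # assume 90% of subscribers are light users

avg_cost = light_share * light_cost + (1 - light_share) * heavy_cost
print(avg_cost)        # 9.8: average serving cost per user
print(fee - avg_cost)  # +10.2: profitable on average, despite losing
                       # $60/month on every heavy user
```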
Thanks. I think I get it now. (At least one of) my confusions was conflating a “transformer run” with the “number of FLOPs”. And I get the point about cost; that’s what I meant, but I articulated it poorly.

An extra recent observation point: currently GPT-4o is priced at $5.00 / 1M input tokens and $15.00 / 1M output tokens https://openai.com/api/pricing/

They just made an experimental “long output” of up to 64K output tokens per request available for “alpha users”, and here is what they did for pricing https://openai.com/gpt-4o-long-output/:
Long completions are more costly from an inference perspective, so the per-token pricing of this model is increased to match the costs.
Interesting, thanks!

Output tokens certainly do not scale linearly, even with a KV cache. The KV cache means you don’t need to recompute the k/q/v vectors for each of the previous tokens, but you still need to compute n q·k dot products for the (n+1)-st token.
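A back-of-the-envelope multiply count for that, under the same toy one-layer, one-head assumptions as the sketches above (real models differ; this only shows the shape of the savings):

```python
def step_cost(n, d, kv_cached=True):
    """Rough multiply count for generating token n+1.
    A d x d projection costs ~d*d multiplies per token; a q.k dot product costs d."""
    new_q = d * d                       # the new token's query (needed either way)
    kv = 2 * d * d if kv_cached else 2 * (n + 1) * d * d
    attn = (n + 1) * d                  # the new query against every key, either way
    return new_q + kv + attn

n, d = 10_000, 64
print(step_cost(n, d, kv_cached=True))   # 652,352
print(step_cost(n, d, kv_cached=False))  # 82,572,352: the cache removes the bulk,
                                         # but the (n+1)*d attention term remains,
                                         # so per-step cost still grows with n
```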
Thanks for the answer, I appreciate it!