I think your model will underestimate the benefits of ramping up spending quickly today.
You model the size of the $ overhang as constant. But in fact it’s doubling every couple of years as global spending on producing AI chips grows. (The overhang relates to the fraction of chips used in the largest training run, not the fraction of GWP spent on the largest training run.) That means that ramping up spending quickly (on training runs or software or hardware research) gives that $ overhang less time to grow.
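To make the point concrete, here is a minimal numerical sketch. The doubling time, starting overhang, and delay are illustrative placeholders, not figures from the model; the only claim is the shape of the effect: if the $ overhang doubles every couple of years, a delayed ramp faces a much larger overhang than one started today.

```python
def overhang(years_from_now, initial=1.0, doubling_time=2.0):
    """Hypothetical $ overhang multiplier after `years_from_now` years,
    assuming it doubles every `doubling_time` years (illustrative numbers)."""
    return initial * 2 ** (years_from_now / doubling_time)

# Overhang faced by a spending ramp starting today vs. one delayed 4 years:
now = overhang(0)     # 1.0x
later = overhang(4)   # 4.0x — the overhang has quadrupled in the meantime
print(f"ramp today: {now:.1f}x overhang; ramp in 4 years: {later:.1f}x overhang")
```

So under these assumptions, each year of delay before ramping up compounds the overhang the eventual scale-up has to work through.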
Interesting! I will see if I can correct that easily.