That assumption is still there because your interpretation is both not true and not justified by this analysis. As I’ve noted several times before, time-travel comparisons like this are useful for forecasting, but they are not causal models of research: they cannot tell you the consequences of halting compute growth, because compute causes algorithmic progress. Algorithmic progress does not drop out of the sky with the tick-tock of a clock; it is the fruit of spending a lot of compute on a lot of experiments, trial-and-error, and serendipity.
Unless you believe that it is possible to create that algorithmic progress in a void of pure abstract thought, with none of the dirty trial-and-error of the sort which actually creates breakthroughs like ResNets or GPTs, then any breakdown like this relying on ‘compute used in a single run’ or ‘compute used in the benchmark instance’ simply represents a lower bound on the total compute spent to achieve that progress.
Once compute stagnates, so too will ‘algorithmic’ progress, because ‘algorithmic’ is just ‘compute’ in a trenchcoat. Only once the compute shows up will the overflowing abundance of ideas be able to be validated, showing which one was a good algorithm after all; otherwise, it’s just Schmidhubering into a void, a game of Trivial Pursuit like ‘oh, did you know ResNets and DenseNets were first invented in 1989, and they had shortcut connections well before that? Too bad they couldn’t make any use of it then, what a pity.’
It sounds like you did not actually read my comment? I clearly addressed this exact point:
Yet it’s unlikely to slow down algorithmic progress much; algorithmic progress does use lots of compute (i.e. trying stuff out), but it uses lots of compute in many small runs, not big runs.
We are not talking here about a general stagnation of compute, we are talking about some kind of pause on large training runs. Compute will still keep getting cheaper.
If you are trying to argue that algorithmic progress only follows from unprecedentedly large compute runs, then (a) say that, rather than strawmanning me as defending a view in which algorithmic progress is made without experimentation, and (b) that seems clearly false of the actual day-to-day experiments which go into algorithmic progress.