With effective compute for AI doubling more than once per year, a global 100% surtax on GPUs and AI ASICs seems like it would make a difference of only months to AGI timelines.
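As a rough back-of-envelope (a minimal sketch; the doubling time below is an assumed illustrative figure, not a sourced estimate): a 100% surtax doubles the price of compute, and a fast-growing compute-per-dollar curve absorbs a doubled price in about one doubling time:

```python
import math

# Back-of-envelope with an ASSUMED doubling time (not a sourced figure):
# a 100% surtax doubles the price of compute. If effective compute per
# dollar doubles every T months, buyers can afford the pre-tax frontier
# again after roughly T * log2(price multiplier) months.
doubling_time_months = 9      # assumed illustrative value, < 12 months
price_multiplier = 2.0        # 100% surtax -> prices double

delay_months = doubling_time_months * math.log2(price_multiplier)
print(f"Estimated delay: {delay_months:.1f} months")  # -> 9.0 months
```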
What is your source for the claim that effective compute for AI is doubling more than once per year? And do you mean effective compute in the largest training runs, or effective compute available in the world more generally?
Is “effective compute” the combination of hardware growth and algorithmic progress? If those factors are multiplicative rather than additive, slowing one of them may accomplish little on its own, but maybe it could pave the way for more significant changes when you slow both at the same time?
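To spell out the multiplicative picture (a sketch; the doubling times are made up for illustration): if effective compute is the product of a hardware term and an algorithms term, their growth rates add in log space, so slowing one factor alone lengthens the combined doubling time only modestly, while slowing both compounds:

```python
# Illustrative only: ASSUMED doubling times, not sourced estimates.
T_hw, T_alg = 24, 16  # months for hardware / algorithmic efficiency to double

# If effective compute = hardware * algorithms, growth rates add in log space:
# 1 / T_eff = 1 / T_hw + 1 / T_alg
T_eff = 1 / (1 / T_hw + 1 / T_alg)
print(f"Combined doubling time: {T_eff:.1f} months")  # ~9.6 months

# Slowing hardware alone by 2x helps only modestly...
print(f"Hardware slowed 2x: {1 / (1 / (2 * T_hw) + 1 / T_alg):.1f} months")  # ~12.0
# ...while slowing both by 2x doubles the combined doubling time.
print(f"Both slowed 2x: {1 / (1 / (2 * T_hw) + 1 / (2 * T_alg)):.1f} months")  # ~19.2
```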
Unfortunately, it seems hard to significantly slow algorithmic progress. I can think of changing publishing norms (and improving security) and pausing research on scary models (for instance via safety evals). Maybe things like handicapping talent pools via changes to immigration policy, or encouraging capability researchers to do other work. But that’s about it.
Still, combining different measures could be promising if the effects are multiplicative rather than additive.
Edit: Ah, but I guess your point is that even a 100% tax on compute wouldn’t really change the slope of the compute growth curve – it would only shift the curve rightward and delay things a little. So we don’t get a multiplicative effect, unfortunately. We’d need to find an intervention that changes the steepness of the curve.
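A sketch of that distinction (with an assumed doubling time): a price multiplier shifts an exponential compute-per-dollar curve rightward by a constant number of months, independent of the target, whereas lengthening the doubling time itself adds delay that grows with how far out the target is:

```python
import math

# Assumed parameters: compute per dollar grows as 2 ** (t / T).
T = 9.0  # assumed doubling time in months

def delay_from_price_multiplier(m: float, T: float) -> float:
    """Months of delay to reach any fixed compute target when prices
    rise by a factor m: the curve shifts right by a constant."""
    return T * math.log2(m)

def extra_delay_from_slower_growth(T_old: float, T_new: float,
                                   months_to_target: float) -> float:
    """Extra months of delay when the doubling time itself lengthens:
    the same number of doublings takes proportionally longer."""
    return months_to_target * (T_new / T_old - 1)

print(delay_from_price_multiplier(2.0, T))             # 9.0 -- one-off shift
print(extra_delay_from_slower_growth(T, 2 * T, 36.0))  # 36.0 -- grows with horizon
```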
If the explicit goal of the regulation is to delay AI capabilities, and the mechanism is taxes, it seems like one could figure out a way to make the delay longer. Also, a few months still seems quite helpful and would count as “substantially” delaying things in my mind.