This is not nothing, but I still don’t expect 20-30 years (from first AGI, not from now!) out of this. There are three hypotheticals I see in this thread: (1) my understanding of Nathan Helm-Burger’s hypothetical, where “regulatory delay” means it’s frontier models in particular that are held back, possibly a compute-threshold setup with a horizon/frontier distinction, where some level of compute (frontier) triggers oversight and a higher level of compute (horizon) is not allowed by default or at all; (2) the hypothetical from my response, where all AI research and DRM-free GPUs are suppressed; and (3) my understanding of the hypothetical in your response, where only AI research is suppressed, but GPUs are not.
I think 20-30 years of uncontrollable GPU progress, or of stockpiling old GPUs, still overdetermines compute-feasible reinvention, even with fewer physically isolated enthusiasts continuing AI research in onionland. Some of those enthusiasts previously took part in a successful AGI project, leaking architectures that were experimentally demonstrated to actually work (the hypothetical starts at the demonstration of AGI, and not everyone involved will be on board with the subsequent secrecy). There is also the option of spending 10 years on a single training run.