I model this as an exponential distribution with a mean time of 5 years.
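As a quick, hedged illustration of what that model implies (the probabilities below are just the exponential CDF evaluated at a few horizons; only the 5-year mean comes from the post):

```python
import math

MEAN_YEARS = 5.0  # stated mean time until invention

def p_invented_by(t_years: float) -> float:
    """P(invention within t years) under an exponential model with the given mean."""
    return 1.0 - math.exp(-t_years / MEAN_YEARS)

for t in (5, 10, 20):
    print(t, round(p_invented_by(t), 2))  # ~0.63, ~0.86, ~0.98
```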
I just want to point out that one of the big hopes for surviving the initial period after AGI is a worldwide regulatory delay in ‘cranking it up’. Thus, AGI could exist in containment in labs but not be used at ‘full power’ on any practical projects like solving aging or digitizing humans. In this scenario, I’m not sure whether you would count this as AGI not yet invented, since it’s not truly in use, or as invented but with survival still unclear, since if it got loose we’d all die. Basically, I want to bring up the possibility of a ‘fuzzy post-invention time full of looming danger, spanning perhaps 20-30 years.’
worldwide regulatory delay [...] ‘fuzzy post-invention time full of looming danger, spanning perhaps 20-30 years.’
I expect something similar, but not to the extent of 20-30 years from regulatory delay alone. That would take banning research and DRM-free GPU manufacturing as well, possibly making it taboo at the cultural level worldwide, and confiscating and destroying existing GPUs (those without built-in compute governance features). Otherwise anyone can find dozens of GPUs and a LLaMA to apply the latest algorithmic developments to, even when doing so is technically illegal. Preventing small projects is infeasible without a pivotal act’s worth of change.
I don’t know. Even technical illegality makes it really hard to start an institution like OpenAI, where you openly have a large budget to hire top talent full time. It also means that the latest algorithms aren’t published in top journals, only whispered in secret. And small hobbyist projects can’t easily get million-dollar compute clusters.
My “otherwise” refers to the original hypothetical where only a “worldwide regulatory delay” is in effect, not a ban on research. So there are still many academic research groups, and they publish their research in journals or on arXiv; they are just not OpenAI-shaped individually (presumably the OpenAI-shaped orgs are also still out there, just heavily regulated, and probably don’t publish). As a result, academic research is exactly the kind of thing that explores what’s feasible with the kind of compute a relatively small unsanctioned AGI project can get its hands on, and 20-30 years of worldwide academic research is a lot of shoulder to stand on.
Hence the alternative I’m talking about: what (regrettably) seems necessary for 20-30 years to pass from the point when AGI first starts mostly working through enormous frontier models, with the world still not overturned at the end of that period, is driving research underground, without official funding or coordinated dissemination, and getting rid of the uncontrollable GPUs that facilitate it and make it applicable to building AGIs. That’s not in the original hypothetical I’m replying to.
Suppose we get AI regulation that is a full but half-heartedly enforced ban.
There are laws against all AI research. If you start a company with a website, offices, etc., openly saying you’re doing AI, the police visit the office and shut it down.
If you publish an AI paper on a widely frequented bit of the internet under your own name, expect trouble.
If you get a cloud instance from a reputable provider and start running an AI model implemented the obvious way, expect it to probably be shut down.
The large science funders and large tech companies won’t fund AI research. Maybe a few shady ones will do a bit of AI, but the skills aren’t really there: they can’t openly hire AI experts, and if they get many people involved, someone will blow the whistle. You need to go to the dark web to so much as download a copy of TensorFlow, and chances are that’s full of viruses.
It’s possible to research AI on your own time, with your own compute. No one will stop you going around various computer stores and buying up GPUs. If you are prepared to obfuscate your code, you can get an AI running on cloud compute. If you want to share AI research under a pseudonym on obscure internet forums, no one will shut it down. (Extra boost if people are drowning such signal under a pile of plausible-looking nonsense.)
I would not expect much dangerous research to be done in this world, and implicit skills would fade. The reasons that make it so hard to repeat the Moon landings would apply: everyone has forgotten the details, the tech has moved on, and the organizational knowledge isn’t there.
This is not nothing, but I still don’t expect 20-30 years (from first AGI, not from now!) out of this. There are three hypotheticals I see in this thread: (1) my understanding of Nathan Helm-Burger’s hypothetical, where “regulatory delay” means it’s frontier models in particular that are held back, possibly a compute-threshold setup with a horizon/frontier distinction, where some level of compute (frontier) triggers oversight and a next level of compute (horizon) is not allowed by default or at all; (2) the hypothetical from my response, where all AI research and DRM-free GPUs are suppressed; and (3) my understanding of the hypothetical in your response, where only AI research is suppressed, but GPUs are not.
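For concreteness, here is a minimal sketch of how a two-threshold (frontier/horizon) compute rule like the one in hypothetical (1) might be expressed; the specific FLOP numbers and names are purely illustrative assumptions of mine, not anything proposed in this thread:

```python
# Hypothetical two-tier compute-governance check (illustrative thresholds only).
FRONTIER_FLOP = 1e25  # assumed: training runs above this trigger oversight
HORIZON_FLOP = 1e27   # assumed: training runs above this are disallowed by default

def classify_training_run(total_flop: float) -> str:
    """Classify a planned training run under the hypothetical frontier/horizon regime."""
    if total_flop >= HORIZON_FLOP:
        return "horizon: not allowed by default"
    if total_flop >= FRONTIER_FLOP:
        return "frontier: allowed under oversight"
    return "sub-frontier: unregulated"

print(classify_training_run(3e24))  # sub-frontier: unregulated
print(classify_training_run(5e26))  # frontier: allowed under oversight
print(classify_training_run(2e27))  # horizon: not allowed by default
```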
I think 20-30 years of uncontrollable GPU progress, or of collecting old GPUs, still overdetermines compute-feasible reinvention, even with fewer, physically isolated enthusiasts continuing AI research in onionland. Some of those enthusiasts previously took part in a successful AGI project, leaking architectures that were experimentally demonstrated to actually work (the hypothetical starts at demonstration of AGI, and not everyone involved will be on board with the subsequent secrecy). There is also the option of spending 10 years on a single training run.
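To make the ‘spend 10 years on one training run’ option concrete, here is a back-of-envelope sketch with assumed hardware numbers (the GPU count, per-GPU throughput, and utilization are my own illustrative guesses, not figures from the thread):

```python
# Rough total compute available to a small clandestine project over a long horizon.
# All figures below are assumptions for illustration.
gpus = 50                    # "dozens of GPUs"
flop_per_s_per_gpu = 3e14    # assumed: roughly A100-class dense bf16 throughput
utilization = 0.3            # assumed sustained utilization
years = 10
seconds = years * 365 * 24 * 3600

total_flop = gpus * flop_per_s_per_gpu * utilization * seconds
print(f"{total_flop:.1e} FLOP over {years} years")  # ~1.4e24 FLOP
```

Whether that is enough depends on how far 20-30 years of algorithmic progress stretch a given compute budget, which is the crux of the disagreement above.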