I think a moratorium is basically intractable short of a totalitarian world government cracking down on all personal computers.
Unless you mean just a moratorium on large training runs, in which case I think it buys a minor delay at best, and it creates a counterproductive pressure on researchers to focus heavily on small-scale algorithmic-efficiency experiments.
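To put a rough number on “minor delay”, here’s a toy calculation (both figures are made-up assumptions of mine, not estimates): if frontier capability scales roughly as compute times algorithmic efficiency, and efficiency keeps doubling while large runs are frozen, then a cap on compute is eroded in log2(cap) doubling times.

```python
import math

# Toy model: frontier capability ~ training compute * algorithmic efficiency.
# Both numbers below are illustrative assumptions, not real-world estimates.
compute_cap_factor = 100       # hypothetical: a cap cuts allowed compute 100x
efficiency_doubling_years = 1  # hypothetical: efficiency doubles yearly anyway

# Years until algorithmic progress alone recovers the capped-away compute.
delay_years = math.log2(compute_cap_factor) * efficiency_doubling_years
print(f"Cap fully eroded after ~{delay_years:.1f} years")  # ~6.6 years
```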
I don’t think controlling compute would be qualitatively harder than controlling, say, pseudoephedrine.
(I think it would be harder, but not qualitatively harder—the same sorts of strategies would work.)
I agree that some amount of control is possible.
But if the future offense-defense balance of bioweapons remains similar to today’s, then the compute equivalent of a single dose of pseudoephedrine slipping past the regulators and getting turned into methamphetamine could result in the majority of humanity being wiped out.
Pseudoephedrine is regulated, yes, but not so strongly that literally none slips past enforcement. With stakes this high, a mostly effective enforcement scheme doesn’t cut it.
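To make that concrete, a toy leak calculation (both numbers are made-up assumptions for illustration): even an enforcement scheme that stops 99.9% of attempts almost surely leaks once attempts accumulate.

```python
# Toy model of why "mostly effective" enforcement fails at extinction-level
# stakes. Both numbers below are made-up assumptions for illustration only.
interdiction_rate = 0.999  # hypothetical: 99.9% of attempts get stopped
attempts_per_year = 1000   # hypothetical number of attempts worldwide

# Probability that at least one attempt slips through in a year.
p_slip_per_year = 1 - interdiction_rate ** attempts_per_year
print(f"P(at least one slip per year) = {p_slip_per_year:.0%}")  # ~63%
```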
That’s only true if a single GPU (or a small number of GPUs) is sufficient to build a superintelligence, right? I expect it to take many years to go from “it’s possible to build superintelligence with a huge multi-billion-dollar project” to “it’s possible to build superintelligence on a few consumer GPUs”. (Unless, of course, someone builds a superintelligence which then figures out how to make GPUs many orders of magnitude cheaper, but at that point the question is moot.)
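For a sense of scale, a back-of-envelope sketch (every figure is a loose assumption of mine): a multi-billion-dollar cluster might hold on the order of 10^5 accelerators, each perhaps 10x a consumer card, putting a few consumer GPUs five to six orders of magnitude short; even at a combined 2x/year gain from hardware and algorithms, that gap takes roughly two decades to close.

```python
import math

# Back-of-envelope for the gap between a frontier cluster and "a few
# consumer GPUs". Every figure below is a loose, illustrative assumption.
cluster_gpus = 100_000       # hypothetical multi-billion-dollar cluster
datacenter_vs_consumer = 10  # hypothetical per-GPU performance ratio
consumer_gpus = 4            # "a few consumer GPUs"

gap = cluster_gpus * datacenter_vs_consumer / consumer_gpus  # ~2.5e5
doubling_years = 1  # assume hardware + algorithms combine to double yearly

years = math.log2(gap) * doubling_years
print(f"Compute gap ~{gap:.0e}x, closing in ~{years:.0f} years")  # ~18 years
```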
Sadly, no. It doesn’t take superintelligence to be deadly. Even current open-weight LLMs, like Llama 3 70B, know quite a lot about genetic engineering. The combination of a clever, malicious human and an LLM able to offer help and advice is sufficient.
Furthermore, there is the consideration of a “seed AI”: one competent enough to keep improving rather than plateau. If a competent human is helping it and getting it unstuck, the bar is even lower. My prediction is that the bar for a “seed AI” is lower than the bar for AGI.