I’d guess the very slow rate of nuclear proliferation has been much harder to achieve than a ban on gain-of-function research would be, since, in the absence of intervention, the incentives to get nukes would have been much stronger than the incentives to do gain-of-function research.
Also, on top of the taboo against chemical weapons, there was the verified destruction of most chemical weapons stockpiles globally.
I agree that nuclear non-proliferation is probably harder than a ban on gain-of-function research. But in that case, the US and the USSR both had a strong incentive to discourage nuclear proliferation, and had enough leverage to coerce smaller states not to work on nuclear weapons development (e.g., one or the other was the security provider for the current government of those states).
Ditto with chemical weapons, which seem to have lost battlefield relevance in conflicts between major powers (i.e., they did not actually break the trench-warfare stalemate in WWI even when deployed on a massive scale, and are mainly useful as a weapon of terror against weaker opponents). At that point, the moral arguments plus the downside risk of chemical attacks against their own citizens shifted the calculus for the major powers, which were then able to enforce the ban somewhat successfully on smaller countries.
I do think that banning GoF research (especially on pathogens that have already caused, or are likely to cause, a human pandemic) should be roughly as hard as the chemical weapons case: there’s not much benefit to the research, and the downside risk is massive. My guess is that a generally sane response to COVID is harder, since it required getting many things right, though the median country’s response seems much worse than the difficulty of the problem alone would lead you to expect.
Unfortunately, I think that AGI-relevant research has far more utility than many of the military technologies we’ve failed to ban. Plus, it’s highly profitable financially, rather than expensive to maintain. So the problem for AGI is harder than the problems we’ve actually seen solved via international coordination?
I agree with a lot of that. Still, if “nuclear non-proliferation [to the extent that it has been achieved] is probably harder than a ban on gain-of-function,” that’s sufficient to prove Daniel’s original criticism of the OP: that governments can [probably] fail at something yet succeed at some harder thing.
(And on a tangent, I’d guess a salient warning shot—which the OP was conditioning on—would give the US + China strong incentives to discourage risky AI stuff.)