As others have said, if an AI is truly superintelligent, there are many paths to world takeover. That doesn’t mean it isn’t worth fortifying the world against takeover; rather, it means that defenses only help if they’re targeted at the world’s weakest link, along whatever axis of weakness matters. In particular, that means finding the civilizational weakness with the lowest bar for how smart the AI system needs to be to exploit it, and raising the bar there. This buys time in the race between AI capability and AI alignment, and buys that extra time at the endgame, when time is most valuable.
I don’t think regulating wetlabs is very promising from this perspective, because as AI world-takeover plans go, “solve molecular nanotech via a wetlab” is at the very high end of intelligence required, and if the AI is smart enough to figure out the nanotech part, it can certainly find ways around any roadblocks you place at the bootstrap-molecule-synthesis step.