The difference between AGI and takeover-level AI could be appreciable. If we’re lucky, takeover by raw capability alone (as opposed to power granted during deployment) turns out to be impossible. In any case, we can try to increase the world’s robustness against takeover. There is a certain AI capability level at which takeover becomes possible, and we should try to push that level upwards as much as possible. Insofar as AI can help with this, we could use it. The extreme case, in which the takeover capability level is never reached because AI-enabled defense keeps improving, is called a positive defense-offense balance.
I can see general internet robustness against hacking as helpful for raising the AI takeover capability level. A single IT system that everyone uses (an operating system, a social media platform, etc.) is a fragile target for hacking, so such monocultures should perhaps be avoided. Personally, I think an AI able to take over the internet might also be able to take over the world, but some people don’t seem to believe this. It may therefore also be useful to widen the gap between taking over the internet and taking over the world, e.g. by making biowarfare harder and taking weapons offline. Finally, lab safety measures, such as airgapping a novel frontier training run, might help as well.