While I can’t quantify the effect, I think secure computer systems would help a lot by limiting the options of an AI attempting malicious actions.
Imagine a near-AGI system with uneven capabilities compared to humans. Maybe its GPT-like (natural language interaction) and Copilot-like (code understanding and generation) capabilities surpass humans’ while its robotics lags behind. More generally, it’s superior in virtual domains, especially those involving strings of characters, but inferior elsewhere. This is all easy to imagine because it just assumes the relative balance of capabilities stays similar to what it is today.
Such a near-AGI system would presumably be superhuman at cyber-attacking. After all, that plays to its strengths. It’d be great at both finding new vulnerabilities and exploiting known ones. Having impenetrable cyber-defenses would neutralize this advantage.
Could the near-AGI system improve its robotics capabilities to gain an advantage in the physical world too? Probably, but that might take a significant amount of time. Doing things in the physical world is hard. No matter how smart you are, your mental model of the world is a simplification of true physical reality, so you will need to run experiments, which takes time and resources. That’s unlike AlphaZero, for example, which can exceed human capabilities quickly because its experiments (self-play games) take place in a perfectly accurate simulation.
One last thing to consider is that provable security has the nice property that you can make progress on it without knowing the nature of the AI you’ll be up against. Having robust cyber-defense will help whether AIs turn out to be deep-learning-based or something else entirely. That makes it in some sense a safe bet, even though it obviously can’t solve AGI risk on its own.
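To make “provable security” concrete, here’s a minimal sketch in Lean 4. The function and theorem are hypothetical toys of my own choosing, but the shape is the real thing: instead of testing a bounds check on some inputs, you prove it can never fail on any input, the way projects like seL4 prove properties of an entire microkernel.

```lean
-- Hypothetical toy example: a bounds-clamping function and a proof
-- that its result is always a valid index, for every possible input.
def clampIndex (i n : Nat) : Nat :=
  if i < n then i else 0

-- The guarantee is universal, not statistical: whenever the buffer
-- length n is positive, the returned index is in bounds.
theorem clampIndex_lt (i n : Nat) (h : 0 < n) : clampIndex i n < n := by
  unfold clampIndex
  split
  · assumption  -- case i < n: the result is i, already in bounds
  · exact h     -- case i ≥ n: the result is 0, in bounds since n > 0
```

The point isn’t this toy theorem; it’s that the guarantee holds regardless of who, or what, generates the inputs, which is exactly why it transfers to an adversary whose nature we don’t know yet.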