Stories of AI takeover often involve some form of hacking. That seems like a pretty good reason to use (maybe relatively narrow) AI to improve software security worldwide. Luckily, the private sector should cover much of this on its own, out of financial self-interest.
I also wonder if the balance of offense vs. defense favors defense here. Usually, recognizing is easier than generating, and this could apply to malicious software. We may have excellent AI antiviruses devoted to the recognizing part, while the AI attackers would have to do the generating part.
[Edit: I’m unsure about the second paragraph here. I feel better about the first paragraph, especially given a slow multipolar takeoff or similar scenarios; I’m not sure about a fast unipolar takeoff.]
Hacking is usually not about writing malicious software; it’s about finding vulnerabilities. You can avoid vulnerabilities entirely with provably safe software, but you still need safe hardware, which is tricky, and provably safe software is hell to develop. It would be nice if AI companies used provably safe sandboxing, but that would require an enormous coordination effort.
And I feel really uneasy about training AI on finding vulnerabilities.
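For concreteness, a toy sketch (in C, with made-up names) of the classic kind of memory-safety bug that “finding vulnerabilities” often means in practice, and that memory-safe languages or formally verified code rule out by construction:

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: no bounds check, so input longer than 15 bytes overwrites
 * adjacent stack memory -- the raw material of many exploits. */
void vulnerable_copy(const char *attacker_input) {
    char buf[16];
    strcpy(buf, attacker_input);
    printf("%s\n", buf);
}

/* Safer: bounded copy with explicit truncation; the overflow is impossible,
 * which is the kind of property a formal proof would pin down. */
void safer_copy(const char *attacker_input) {
    char buf[16];
    strncpy(buf, attacker_input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    printf("%s\n", buf);
}

int main(void) {
    safer_copy("a string much longer than sixteen bytes");
    return 0;
}
```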
Also, “provably safe” is a property a system can only have relative to a specific threat model. Many vulnerabilities come from the engineer having an incomplete or incorrect threat model, though (most obviously the many kinds of side-channel attack).
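A concrete example of that gap, in a toy setting: both functions below compute the same answer, so a purely functional-correctness proof would bless either one, yet the first leaks the secret through timing because the proof’s threat model never mentioned execution time.

```c
#include <stddef.h>

/* Leaky: running time depends on how many leading bytes of the guess match
 * the secret, letting an attacker recover it byte by byte via timing. */
int leaky_compare(const unsigned char *secret, const unsigned char *guess, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (secret[i] != guess[i])
            return 0;   /* early exit reveals the mismatch position */
    }
    return 1;
}

/* Constant-time: always touches every byte, so timing reveals nothing
 * beyond the length being compared. */
int constant_time_compare(const unsigned char *secret, const unsigned char *guess, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++) {
        diff |= (unsigned char)(secret[i] ^ guess[i]);
    }
    return diff == 0;
}
```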