I was about to make the same point. GPT-x is, at best, trying to hack GPT-(x-1). Unless there is a very sudden takeoff, important software will be checked and rechecked by every capable AI. Yud seems to miss this (or believes the hard takeoff is so sudden that there won't be any GPT-(x-1) to make the code secure).
I remember when spam used to be a thing and people were breathlessly predicting a flood of Android viruses… Attack doesn’t always get easier.
Yep, cybersecurity is the biggest area where I suspect intelligence improvements will let the defense pull very far ahead of the attack, for complexity-theoretic reasons, and because of other sorts of cryptography in the works.
IMO, if we survive, our era of easy hacking will look a lot like the history of piracy: it used to be a big threat, but we no longer care because pirates can't succeed anymore.
Cryptography vs. cryptanalysis will probably go the same way as anti-piracy forces vs. pirates: a decisive victory for the defense, in time.
Yes, AI advances help both the attacker and the defender. In some cases, like spam and real-time content moderation, they enable capabilities the defender simply didn't have before. In others they elevate both sides of the arms race, and it's not immediately clear what equilibrium we end up in.
In particular, re hacking/vulnerabilities, it's less clear whom AI helps more. It might also change with time: initially, AI enables "script kiddies" who can hack systems without much skill; later, an AI pass that searches for vulnerabilities and then fixes them becomes part of the standard pipeline. (Or, if we're lucky, the second phase happens before the first.)
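To make that second phase concrete, here's a minimal sketch of an AI scan-and-fix gate in a build pipeline. `find_vulnerabilities` and `propose_patch` are hypothetical stand-ins for whatever model or tooling would actually do the work, not any real API:

```python
# Hypothetical sketch of an AI vulnerability scan as a standard pipeline gate.
# find_vulnerabilities() and propose_patch() are made-up stand-ins, not real APIs.
from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    line: int
    description: str


def find_vulnerabilities(source_dir: str) -> list[Finding]:
    # Stand-in: a real scanner (AI or otherwise) would return actual findings.
    return []


def propose_patch(finding: Finding) -> str:
    # Stand-in: a real system would return an AI-generated candidate diff.
    return f"--- candidate fix for {finding.file}:{finding.line} ---"


def pipeline_gate(source_dir: str) -> bool:
    """Fail the build until every finding has a reviewed candidate patch."""
    findings = find_vulnerabilities(source_dir)
    for f in findings:
        print(f"{f.file}:{f.line}: {f.description}")
        print(propose_patch(f))  # a human still reviews before merging
    return not findings


if __name__ == "__main__":
    assert pipeline_gate("src/")  # passes trivially with the empty stand-in
```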
Lucky or intentional. Exploit embargoes artificially weight the balance towards the defender; we should create a strong norm of giving defenders first access to new AI capabilities.
Yes, the norms of responsible disclosure of security vulnerabilities, where potentially affected companies get advance notice before public disclosure, can and should be applied to vulnerability-discovering AIs as well.
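The embargo logic itself is simple to state. A toy sketch, where the 90-day default mirrors common industry practice (e.g. Google Project Zero) and everything else is made up for illustration:

```python
# Toy sketch of an embargoed disclosure. The 90-day default mirrors common
# industry practice (e.g. Google Project Zero); the rest is illustrative.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Disclosure:
    vendor: str
    reported_on: date
    embargo_days: int = 90  # the defender's head start before publication

    def publishable(self, today: date) -> bool:
        """Public disclosure only after the vendor's head start elapses."""
        return today >= self.reported_on + timedelta(days=self.embargo_days)


d = Disclosure(vendor="ExampleCorp", reported_on=date(2023, 4, 1))
print(d.publishable(date(2023, 5, 1)))   # False: still inside the embargo
print(d.publishable(date(2023, 7, 15)))  # True: the 90 days have passed
```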
I think it’s clear in the scenario of hacker vs defender, the defender has a terminal state of being unhackable while the hacker has no such terminal state.
Yes, in the asymptotic limit the defender could get to bug-free software. But until then, it's not clear who is helped the most by advances. In particular, attackers can sometimes be more agile in exploiting new vulnerabilities, while patching them can take a long time. (Case in point: it took ages to get the insecure hash function MD5 out of deployed security-sensitive code, even at companies such as Microsoft; if I recall correctly, it was the Stuxnet-related Flame malware that exploited an MD5 collision to forge a Microsoft code-signing certificate.)
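For what it's worth, the code-level change was always trivial; the years-long delay was in coordinating deployed systems. A minimal sketch with Python's standard hashlib:

```python
import hashlib

data = b"security-sensitive payload"

# The broken construction: MD5 collisions are practical, so two different
# payloads can be engineered to share this digest.
weak_digest = hashlib.md5(data).hexdigest()

# The one-line migration that nonetheless took years to roll out everywhere.
strong_digest = hashlib.sha256(data).hexdigest()

print(weak_digest)
print(strong_digest)
```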
This is probably because there wasn't a huge reason to (Stuxnet-style attacks take massive resources and are maybe not frequent enough to justify fixing), and engineering time is expensive. As long as band-aid patches are available, the same AI can just be used to patch all these vulnerabilities. Engineering time also probably goes down if you have exploit-finding AI.