That is a monumentally difficult undertaking, infeasible with current hardware limitations, and certainly impossible on the “moments” timescale.
I think you underestimate the state of the art, such as the SAT/SMT-solver revolution in computer security. They automatically find exploits all the time, against OSes and libraries and APIs.
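For readers unfamiliar with the technique being referenced: solver-based tools translate a program path into a logical formula and ask a SAT/SMT solver for an input that reaches a dangerous state. Here is a toy Python sketch of the idea, with brute-force search standing in for a real solver, and with a deliberately buggy routine invented for illustration:

```python
# Toy illustration of solver-based exploit finding.
# A real tool would hand the path condition to an SMT solver;
# here we brute-force the tiny input space instead.

BUF_SIZE = 8

def parse_packet(length: int, offset: int) -> str:
    """A buggy routine: the bounds check forgets to account for `offset`."""
    if length > BUF_SIZE:           # incomplete check
        return "rejected"
    if offset + length > BUF_SIZE:  # writes past the buffer
        return "OVERFLOW"
    return "ok"

def find_exploit():
    """Search for inputs satisfying the 'path condition' for the bug:
    length <= BUF_SIZE  AND  offset + length > BUF_SIZE."""
    for length in range(16):
        for offset in range(16):
            if parse_packet(length, offset) == "OVERFLOW":
                return length, offset
    return None

print(find_exploit())  # → (0, 9)
```

The solver's advantage over brute force is that it reasons over the constraints symbolically, so the same approach works when the input space is far too large to enumerate.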
I think you miss my point. These SAT solvers are extremely expensive and don’t scale well to large code bases. You can look to the literature to see the state of the art: long-running analyses on large clusters, applied to small code bases or isolated sections of a library. They do not, and with available resources cannot, scale up to large-scale analysis of an entire OS or network stack … if they could, we humans would have done that already.
So to be clear, this UFAI breakout scenario assumes the AI already has access to massive amounts of computing hardware, which it can repurpose to long-duration HPC workloads while evading detection. And even if you find that realistic, I still wouldn’t use the word “momentarily.”
They have done that already. For example, this paper: “We implement our approach using a popular graph database and demonstrate its efficacy by identifying 18 previously unknown vulnerabilities in the source code of the Linux kernel.”
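The approach quoted (likely the “code property graph” line of work) stores a program’s syntax, control flow, and data flow in a single graph and finds bugs by running queries over it, e.g. “attacker-controlled data reaches a dangerous sink with no bounds check on the path.” A much-simplified sketch of such a query over a hand-built graph (all node names and edges are invented for illustration):

```python
# Simplified sketch of a code-property-graph query: flag (source, sink)
# pairs where attacker-controlled data reaches a dangerous argument
# without passing through a sanitizing bounds check.
# The real approach stores AST + control flow + data flow for a whole
# codebase in a graph database; here the "graph" is a few dict entries.

DATA_FLOW = {                                  # node -> nodes it flows into
    "recv_len": ["memcpy_1_len"],              # tainted, unchecked
    "pkt_len":  ["check_1"],                   # tainted, but checked
    "check_1":  ["memcpy_2_len"],
}
SOURCES    = {"recv_len", "pkt_len"}           # attacker-controlled inputs
SINKS      = {"memcpy_1_len", "memcpy_2_len"}  # dangerous arguments
SANITIZERS = {"check_1"}                       # bounds checks

def unchecked_paths():
    """Return (source, sink) pairs reachable without passing a sanitizer."""
    hits = []
    for src in sorted(SOURCES):
        stack, seen = [(src, False)], set()
        while stack:
            node, sanitized = stack.pop()
            if (node, sanitized) in seen:
                continue
            seen.add((node, sanitized))
            if node in SINKS and not sanitized:
                hits.append((src, node))
            for nxt in DATA_FLOW.get(node, []):
                stack.append((nxt, sanitized or nxt in SANITIZERS))
    return hits

print(unchecked_paths())  # → [('recv_len', 'memcpy_1_len')]
```

Once the whole codebase is in the database, one query like this can sweep millions of lines, which is how a single analysis can surface a batch of previously unknown kernel bugs.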
Large clusters like… the ones that an AI would be running on?
They don’t have to scale, although that may become possible as computing power increases (you only need to find an exploit somewhere, not all exploits everywhere), and I am skeptical that we humans would, in fact, ‘have done that already’. That claim seems to prove too much: are existing static code analysis tools applied everywhere? Are existing fuzzers applied everywhere?
Why in the world would a security audit of a bunch of code be “monumentally difficult” for an AI..?
It requires an infeasible amount of computation for us humans to do. Why do you suppose it would be different for an AI?
Um. Humans, in real life, do run security audits of software. It’s nothing rare or unusual. These audits are frequently assisted by automated tools (e.g., checking for buffer overruns). Again, this is happening right now, in real life, and no “infeasible amount of computation” problem arises.
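“Assisted by automated tools” covers everything from pattern-matching linters to coverage-guided fuzzers. The core loop of a mutation fuzzer is simple enough to sketch; the target function and mutation strategy below are toys invented for illustration, not any real tool:

```python
# Minimal mutation fuzzer: randomly perturb a seed input and report the
# first mutant that makes the target raise. Real fuzzers add coverage
# feedback, corpus management, and smarter mutations on top of this loop.
import random

def target(data: bytes) -> None:
    """Toy target: crashes on a specific malformed header."""
    if len(data) >= 4 and data[:2] == b"HD" and data[2] > 0x7F:
        raise RuntimeError("crash: header length byte out of range")

def fuzz(seed: bytes, iterations: int = 20000):
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(iterations):
        data = bytearray(seed)
        for _ in range(rng.randint(1, 3)):   # flip 1-3 random bytes
            data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except RuntimeError:
            return bytes(data)               # crashing input found
    return None

crash = fuzz(b"HD\x00\x00")
print(crash)
```

The point of the sketch: finding *some* crashing input needs no exotic computation, just many cheap executions of the target, which is exactly the kind of work that parallelizes trivially.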
Doing an audit to catch all vulnerabilities is monstrously hard. But finding some vulnerabilities is a perfectly straightforward technical problem.
It happens routinely that people develop new and improved vulnerability detectors that can quickly find vulnerabilities in existing codebases. I would be unsurprised if better optimization engines in general lead to better vulnerability detectors.