Yes, well, I think XiXiDu did himself a disservice there. If Snowden is to be believed, and as various pieces of state-sponsored malware (Stuxnet, Flame, BadBIOS(?)) have shown, the NSA has already “taken over” the internet. They may not have root access on any arbitrary internet-connected machine, but they could get it if they wanted.
My objection (and his?) is to the claim that an AI could replicate this capability in “moments,” according to the “because superhuman!” line of reasoning. I find that bogus.
Let me suggest a way:
(1) Gain control of a single machine
(2) Decompile the OS code
(3) Run a security audit on the OS, find exploits
Even easier if the OS is open source.
An AI probably wouldn’t need to decompile anything—given the kind of optimizations that one could apply, there’s no particularly strong reason to think one would be any less comfortable in native machine code or, say, Java bytecode than in source. The only reason we are is that it’s closer to natural language and we’re bad at keeping track of a lot of disaggregated state.
That is a monumentally difficult undertaking, unfeasible with current hardware limitations, certainly impossible in the “moments” timescale.
I think you underestimate the state of the art, such as the SAT/SMT-solver revolution in computer security. They automatically find exploits all the time, against OSes and libraries and APIs.
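To make that concrete, here is a toy sketch of the kind of query such tools reduce exploit-finding to, using the z3-solver Python package. Everything in it (the imaginary packet parser, the hand-written constraints) is an illustrative assumption; real systems like KLEE, SAGE, or angr extract the path constraints automatically from the code under test.

```python
# Ask an SMT solver for a concrete input that reaches a buffer overflow.
# The "parser" being modeled is made up: it reads a 16-bit length field,
# rejects zero, then copies `length` bytes into a 64-byte buffer.
from z3 import BitVec, Solver, UGT, sat

length = BitVec('length', 16)   # attacker-controlled 16-bit length field
BUF_SIZE = 64                   # size of the destination buffer

s = Solver()
s.add(length != 0)              # the parser's (inadequate) sanity check
s.add(UGT(length, BUF_SIZE))    # unsigned: copy length exceeds the buffer

if s.check() == sat:
    print("overflowing length field:", s.model()[length])
```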
I think you miss my point. These SAT solvers are extremely expensive, and don’t scale well to large code bases. You can look to the literature to see the state of the art: using large clusters for long-running analysis on small code bases or isolated sections of a library. They do not and cannot with available resources scale up to large-scale analysis of an entire OS or network stack … if they did, we humans would have done that already.
So to be clear, this UFAI breakout scenario is assuming the AI already has access to massive amounts of computing hardware, which it can repurpose to long-duration HPC applications while escaping detection. And even if you find that realistic, I still wouldn’t use the word “momentarily.”
They have done that already. For example, this paper (Yamaguchi et al. 2014, “Modeling and Discovering Vulnerabilities with Code Property Graphs”): “We implement our approach using a popular graph database and demonstrate its efficacy by identifying 18 previously unknown vulnerabilities in the source code of the Linux kernel.”
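The paper’s trick is to turn “vulnerability pattern” into “graph query.” Here is a toy sketch of that idea, mine rather than the paper’s: a few hand-made nodes stand in for statements, data-flow edges connect them, and the query looks for a dangerous sink reachable from attacker input with no sanitizer on the path. It assumes the networkx package in place of the paper’s graph database, and every node name is made up.

```python
# Toy code property graph: statements as nodes, data flow as edges.
import networkx as nx

g = nx.DiGraph()
g.add_node("recv_len",  kind="source")     # attacker-controlled input
g.add_node("check_len", kind="sanitizer")  # bounds check
g.add_node("memcpy_a",  kind="sink")       # copy guarded by the check
g.add_node("memcpy_b",  kind="sink")       # copy missing the check
g.add_edge("recv_len", "check_len")        # data-flow ("REACHES") edges
g.add_edge("check_len", "memcpy_a")
g.add_edge("recv_len", "memcpy_b")

def unsanitized_sinks(g):
    """Sinks reachable from a source along a path with no sanitizer."""
    for src in (n for n, d in g.nodes(data=True) if d["kind"] == "source"):
        for sink in (n for n, d in g.nodes(data=True) if d["kind"] == "sink"):
            for path in nx.all_simple_paths(g, src, sink):
                if not any(g.nodes[n]["kind"] == "sanitizer" for n in path):
                    yield sink

print(sorted(set(unsanitized_sinks(g))))  # ['memcpy_b']
```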
Large clusters like… the ones that an AI would be running on?
They don’t have to scale, although that may be possible given increases in computing power (you only need to find an exploit somewhere, not all exploits everywhere), and I am skeptical we humans would, in fact, ‘have done that already’. That claim seems to prove way too much: are existing static code analysis tools applied everywhere? Are existing fuzzers applied everywhere?
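(To make “fuzzer” concrete: the core of one is just the loop below. The parse function standing in for the code under test is hypothetical; industrial fuzzers like AFL or libFuzzer add coverage feedback, but the cheapness of the basic technique is exactly why “is it applied everywhere?” is a fair question.)

```python
# Minimal mutation fuzzer: flip random bytes in a seed input, watch for crashes.
import random

def parse(data: bytes) -> None:
    """Stand-in for the code under test (hypothetical)."""
    if len(data) > 4 and data[0] == 0xFF:
        raise IndexError("simulated out-of-bounds read")

seed = b"\x00HELLO"
for trial in range(10_000):
    mutated = bytearray(seed)
    mutated[random.randrange(len(mutated))] = random.randrange(256)
    try:
        parse(bytes(mutated))
    except Exception as exc:            # any uncontrolled crash is a finding
        print(f"trial {trial}: input {bytes(mutated)!r} -> {exc!r}")
        break
```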
Why in the world would a security audit of a bunch of code be “monumentally difficult” for an AI..?
It requires an infeasible amount of computation for us humans to do. Why do you suppose it would be different for an AI?
Um. Humans—in real life—do run security audits of software. It’s nothing rare or unusual. Frequently these audits are assisted by automated tools (e.g. checking for buffer overruns, etc.). Again, this is happening right now in real life and there are no “infeasible amount of computation” problems.
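For a sense of scale, the simplest of those automated helpers amounts to little more than the sketch below: a crude pass that flags calls to C string functions with no bounds checking. Real audit tools (flawfinder, the Clang static analyzer, Coverity) are far more sophisticated, but even a pass this dumb finds real bugs, which is why “infeasible amount of computation” is the wrong picture. The file-scanning details here are my own illustration.

```python
# Crude static audit pass: flag unbounded C string functions.
import re
import sys

UNSAFE_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def audit(path: str) -> None:
    with open(path, errors="replace") as src:
        for lineno, line in enumerate(src, 1):
            if UNSAFE_CALLS.search(line):
                print(f"{path}:{lineno}: possible buffer overrun: {line.strip()}")

for path in sys.argv[1:]:   # usage: python audit.py foo.c bar.c
    audit(path)
```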
Doing an audit to catch all vulnerabilities is monstrously hard. But finding some vulnerabilities is a perfectly straightforward technical problem.
It happens routinely that people develop new and improved vulnerability detectors that can quickly find vulnerabilities in existing codebases. I would be unsurprised if better optimization engines in general lead to better vulnerability detectors.