Yes, I’m saying that to get human-like learning the AI has to have the ability to write code that it will later use to perform cognitive tasks. You can’t get human-level intelligence out of a hand-coded program operating on a passive database of information using only fixed, hand-written algorithms.
So that presents you with the problem of figuring out which AI-written code fragments are safe, not just in isolation, but in all their interactions with every other code fragment the AI will ever write. This is the same kind of problem as creating a secure browser or Java sandbox, only worse. Given that no one has ever come close to solving it for the easy case of resisting human hackers without constant patches, it seems very unrealistic to think that any ad-hoc approach is going to work.
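To make the composition problem concrete, here is a minimal Java sketch (the fragment names and scenario are hypothetical, invented for illustration): two fragments that would each pass a safety review in isolation, but whose composition leaks a secret into a public channel.

```java
// Hypothetical illustration: two code fragments that each pass a local
// safety check, but whose composition leaks data that a reviewer of
// either fragment alone would never see.
public class CompositionDemo {

    // Fragment A: returns a value the process is allowed to hold in
    // memory. Harmless in isolation -- the secret never leaves the process.
    static String readSecret() {
        return "s3cr3t-token";
    }

    // Fragment B: writes whatever string it is given to a world-readable
    // log. Also harmless in isolation -- it has no access to secrets.
    static void appendToPublicLog(String line) {
        System.out.println("[public log] " + line);
    }

    public static void main(String[] args) {
        // The hazard exists only in the composition: the secret from A
        // flows into the public channel of B.
        appendToPublicLog(readSecret());
    }
}
```

Checking fragments one at a time never exercises the dangerous path; the hazard only exists in the composition, and the number of possible compositions grows combinatorially with every fragment the AI writes.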
You can’t get human-level intelligence out of a hand-coded program operating on a passive database of information using only fixed, hand-written algorithms.
You can’t? The entire genre of security exploits that build a Turing-complete language out of library fragments (return-oriented programming; libc is a popular target) suggests that a hand-coded program certainly could be exploited, inasmuch as pretty much all programs like libc are hand-coded these days.
I’ve found Turing-completeness (and hence the possibility of an AI) can lurk in the strangest places.
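As a toy illustration of how little it takes (this is a sketch of the idea, not an actual exploit): model a few innocuous “library fragments” as gadgets in Java and chain them. Increment, decrement, and branch-on-zero are already sufficient for a Minsky counter machine, which is Turing-complete.

```java
import java.util.List;

// Illustrative sketch: three harmless-looking "gadgets" that, once they
// can be sequenced, form a Minsky counter machine -- enough for
// arbitrary computation.
public class GadgetChain {

    // One gadget: mutates the registers and returns the next gadget index.
    interface Gadget { int step(long[] regs, int pc); }

    static Gadget inc(int r)            { return (regs, pc) -> { regs[r]++; return pc + 1; }; }
    static Gadget dec(int r)            { return (regs, pc) -> { regs[r]--; return pc + 1; }; }
    static Gadget jz(int r, int target) { return (regs, pc) -> regs[r] == 0 ? target : pc + 1; }

    // The "interpreter" is nothing but repeated dispatch through the chain;
    // it halts when control flows past the end of the program.
    static void run(List<Gadget> program, long[] regs) {
        int pc = 0;
        while (pc >= 0 && pc < program.size()) {
            pc = program.get(pc).step(regs, pc);
        }
    }

    public static void main(String[] args) {
        // Program: drain register 1 into register 0, i.e. regs[0] += regs[1].
        List<Gadget> addLoop = List.of(
            jz(1, 4),   // 0: if regs[1] == 0, jump past the end (halt)
            dec(1),     // 1: regs[1]--
            inc(0),     // 2: regs[0]++
            jz(2, 0)    // 3: regs[2] stays 0, so this is an unconditional jump to 0
        );
        long[] regs = {3, 4, 0};
        run(addLoop, regs);
        System.out.println("3 + 4 = " + regs[0]);  // prints 7
    }
}
```

None of the three gadgets looks like an interpreter on its own; the computational power appears only once they can be sequenced, which is the same trick return-oriented programming plays with fragments of libc.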
If I understand you correctly, you’re asserting that nobody has ever come close to writing a sandbox in which code can run but not “escape”. I was under the impression that this had been done perfectly, many, many times. Am I wrong?
There are different kinds of escape. No Java program has ever convinced a human to edit the security-permissions file on the computer where the Java program is running. But that could be a good way to escape the sandbox.
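For readers who haven’t seen the mechanism being referred to: a minimal sketch of the classic Java sandbox, in which a SecurityManager consults a policy file and denies anything the policy does not grant. (The SecurityManager API was deprecated in Java 17; on JDK 18+ this sketch needs -Djava.security.manager=allow to run.)

```java
public class SandboxDemo {
    public static void main(String[] args) {
        // Install the default security manager; it enforces whatever the
        // JVM's java.policy file grants, and nothing more.
        System.setSecurityManager(new SecurityManager());
        try {
            // Denied unless the policy file grants a matching
            // FilePermission -- which is exactly why persuading a human
            // to edit that file would be a way out of the sandbox.
            new java.io.FileReader("/etc/passwd");
            System.out.println("read allowed by policy");
        } catch (SecurityException e) {
            System.out.println("sandbox blocked the read: " + e);
        } catch (java.io.FileNotFoundException e) {
            System.out.println("read permitted, but file not found");
        }
    }
}
```

The technical enforcement is solid as far as it goes; the point of the comment is that the human who edits the policy file sits outside the enforcement boundary.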