This came up in the latest London Meetup, where I voiced a thought I’ve been having for a while. What if we created an epistemic containment area, effectively a simulated universe that contains the problem we want solved? The AI would not even know that anything outside that universe exists, and would have no way of gaining information about it. I think ciphergoth mentioned this is also David Chalmers’ proposal? In any case, I suspect we could prove containment within such a space, with us having read-only access to the results of the process.
The interesting questions are whether we could invent a containment area which didn’t include clues that it was a simulation, and if the AI deduces that it’s in a simulation, whether it could find a way out.
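To make the “read-only access” part of the idea concrete, here is a very rough sketch, with entirely made-up names (SimulatedWorld, Agent, run_contained are hypothetical, not a concrete design): the solver only ever sees the simulated world’s state, and we only ever copy its results out, never feed anything back in.

class SimulatedWorld:
    def __init__(self, problem):
        self.state = problem          # the problem instance lives inside the world
        self.results = []             # everything the agent produces stays in here

    def observe(self):
        return self.state             # the only thing the agent can ever see

    def record(self, answer):
        self.results.append(answer)

class Agent:
    def act(self, observation):
        # Stand-in for the AI's actual reasoning over the contained problem.
        return f"proposed answer for {observation!r}"

def run_contained(problem, steps=3):
    world = SimulatedWorld(problem)
    agent = Agent()
    for _ in range(steps):
        world.record(agent.act(world.observe()))
    # Read-only hand-off: we copy the results out; nothing flows back in.
    return tuple(world.results)

print(run_contained("factor 2^67 - 1"))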
I think the brain power required to design a simulation of our universe complex enough to contain the same problems we want the AI to solve is greater than the brain power of the AI we are asking to solve them.
IMO, people want machine intelligence to help them to attain their goals. Machines can’t do that if they are isolated off in virtual worlds. Sure, there will be test harnesses—but it seems rather unlikely that we will keep these things under extensive restraint on grounds of sheer paranoia—that would stop us from taking advantage of them.
So: that is why there are so few clues that we are being simulated!
It doesn’t have to be -our- universe. For instance, Rule 110 is Turing-complete and could therefore act as the containment universe for an AI.
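A minimal sketch of Rule 110 as a self-contained toy universe (assuming a wrapped-around row of cells; the width, step count, and seed below are arbitrary illustrative choices):

# Rule 110 elementary cellular automaton: the entire "universe" is one row
# of bits, updated by a single fixed local rule.

RULE = 110           # 0b01101110: lookup table over the 8 possible neighbourhoods
WIDTH = 64           # number of cells in the wrap-around universe
STEPS = 32           # how many generations to run

def step(cells):
    """Next generation: each cell looks only at itself and its two neighbours."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell at the right edge (a common Rule 110 seed).
cells = [0] * (WIDTH - 1) + [1]

# Our "read-only access": we only print the history; nothing outside is fed back in.
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)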
My comment on Chalmers’ blog: