I would guess that one reason this containment method has not been seriously considered is that the amount of detail a simulation would require for the AI to do anything we find useful is so far beyond our current capabilities that it doesn’t seem worth considering.
Actually, it is trivially easy to contain an AI in a sim, as long as it grows up in the sim. Its sensory systems will then only recognize the sim physics as real. You are incorrectly projecting your own sensory system onto the AI, comparing it to your personal experiences with games or sim worlds.
In fact, it doesn’t matter how ‘realistic’ the sim is from our perspective. An AI could be grown in a cartoon world or even a purely text-based world, and in either case would have no more reason to believe it is in a sim than you or I.
Intelligent design was not such a remote hypothesis for humans. Its salience doesn’t derive from observations of inanimate physics but rather inferences about possible causes and effects of mind:
I am capable of designing/dreaming/simulating, so I must consider that I may be designed/dreamed/simulated.
I and the world seem to be complex, optimized artifacts. A possible cause of complex optimized artifacts is intelligent design.
As I think for longer and advance technology, it becomes increasingly clear that it would be possible and potentially attractive to trap an intelligent observer in a simulation.
Imagine what would have happened if we’d inspected the substrate and found mostly corroborating instead of neutral/negative evidence for the ID/sim hypothesis. Our physics and natural history seem to provide sufficient explanation for blind emergence. And yet we still might be in a simulation. It’s still in our prior because we perceive some obvious implications of intelligence, and I expect it will be hard to keep out of an AGI’s prior for convergent reasons. If the AI reflects not only on its mind but also the world it grew up in and notices, say, that the atoms are symbols[text] bearing imprints of history and optimization from another world, or even simply that there’s no satisfactory explanation for its own origin to be found within its world, a simulation hypothesis will be amplified.
Unless the simulation is optimized to deceive, it will leak corroborating evidence of its truth in expectation, like any physics and history, and like intelligence has leaked evidence of its own implicit simulation destiny all along.
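The "leaks evidence in expectation" claim has a standard Bayesian reading: for an observer inside a genuine sim, the expected log-likelihood ratio of any observation is the KL divergence between what the sim actually produces and what a natural world would produce, which is positive unless the two are identical (i.e. unless the simulation is tuned to mimic a natural world exactly). A minimal sketch, with arbitrary illustrative likelihoods that are my assumption rather than anything from the discussion:

```python
import math

def kl_bern(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Hypothetical likelihoods (arbitrary numbers, for illustration only):
p_sim = 0.6  # chance of a "design-flavored" observation if the world is simulated
p_nat = 0.4  # chance of the same observation if the world arose blindly

# Expected evidence (nats per observation) accruing to a Bayesian observer
# inside a genuine sim. It is strictly positive whenever p_sim != p_nat,
# and exactly zero only when the sim's statistics match a natural world's.
expected_evidence = kl_bern(p_sim, p_nat)
print(expected_evidence > 0)           # evidence leaks in expectation
print(kl_bern(0.5, 0.5) == 0.0)        # a perfect mimic leaks nothing
```

This is just the usual "no expected evidence without a real difference" point: deception here means driving `p_sim` to equal `p_nat`, which is what "optimized to deceive" would have to accomplish.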
Yeah, I mostly agree with all this: intelligent design seems to be an obvious hypothesis. Notice, however, that this is completely different from “the AGI will obviously notice holes in the simulation”.
If the sim is large and long-running enough, a sufficiently advanced sim AGI civilization could have a scientific revolution, start accumulating the results of physics experiments, and eventually determine that the evidence favors intelligent design. But that is also enormously different from individual AGIs quickly noticing holes in the simulation.