I would guess that one reason this containment method has not been seriously considered is that the amount of detail a simulation would need before the AI could do anything we find useful is so far beyond our current capabilities that it doesn’t seem worth considering. The case you present, an exact copy of our Earth, would require a ridiculous amount of processing power at the very least, and consider that simulating the billions of human brains in this copy would already constitute a form of AGI. A simulation with less detail would be correspondingly less informative about reality, and could not be seen as a valid test of whether an AI really is friendly.
Oh, and there is still the core issue of boxed AI: it’s very possible that a boxed superintelligent AGI will see holes in the box that we are not smart enough to see, and there’s no way around that.
So… can it be said that the advent of an AGI will also provide a satisfactory answer to the question of whether we are currently in a simulation? That is what you (and avturchin) seem to imply. Also, this stance presupposes that:
- an AGI can ascertain such observations to be highly probable/certain;
- it is theoretically possible to find out the true nature of one’s world (and that a super-intelligent AI would be able to do this);
- it will inevitably embark on a quest to ascertain the nature and fundamental facts about its reality;
- we can expect a “question absolutely everything” attitude from an AGI (something that is not necessarily desirable, especially in matters where facts may be hard to come by or are a matter of choice or preference).
Or am I actually missing something here? I am assuming that is very probable ;)
Actually, it is trivially easy to contain an AI in a sim, as long as it grows up in the sim. Its sensory systems will then only recognize the sim physics as real. You are incorrectly projecting your own sensory system onto the AI, comparing it to your personal experiences with games or sim worlds.
In fact it doesn’t matter how ‘realistic’ the sim is from our perspective. An AI could be grown in a cartoon world or even a purely text-based world, and in either case would have no more reason to believe it is in a sim than you or I do.
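To make that concrete, here is a minimal toy sketch (all names like `SimWorld` and `Agent` are hypothetical, not from any existing framework): an agent whose only observation channel is the simulated world has no data about anything else, so the sim’s transition rule simply is its physics.

```python
# Toy sketch of "grown in the sim" containment: the agent's entire
# perceptual history comes from the simulated world, so the sim's
# transition rule is the only physics it can ever learn.
import random


class SimWorld:
    """A toy world; its 'physics' is just this transition rule."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.state = 0

    def step(self, action: int) -> int:
        # The only regularities the agent can ever observe live here.
        self.state = (self.state + action + self.rng.randint(0, 1)) % 10
        return self.state


class Agent:
    """Builds an empirical model of whatever world feeds it observations."""

    def __init__(self):
        self.model = {}  # (state, action) -> list of observed next states

    def act(self, observation: int) -> int:
        return observation % 3  # placeholder policy

    def observe(self, state: int, action: int, next_state: int) -> None:
        self.model.setdefault((state, action), []).append(next_state)


world, agent = SimWorld(), Agent()
obs = world.step(0)
for _ in range(1000):
    action = agent.act(obs)
    nxt = world.step(action)
    agent.observe(obs, action, nxt)
    obs = nxt
# Nothing in agent.model refers to host-level physics: the agent's
# empirical universe is exactly the sim's transition rule.
```

The design point is that containment here comes from the I/O boundary, not from the sim’s fidelity: swap `SimWorld`’s transition rule for a cartoon or text world and the argument is unchanged.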
Intelligent design was not such a remote hypothesis for humans. Its salience doesn’t derive from observations of inanimate physics but rather from inferences about the possible causes and effects of mind:
- I am capable of designing/dreaming/simulating, so I must consider that I may be designed/dreamed/simulated.
- I and the world seem to be complex, optimized artifacts. A possible cause of complex, optimized artifacts is intelligent design.
- As I think for longer and technology advances, it becomes increasingly clear that it would be possible, and potentially attractive, to trap an intelligent observer in a simulation.
Imagine what would have happened if we’d inspected the substrate and found mostly corroborating instead of neutral/negative evidence for the ID/sim hypothesis. Our physics and natural history seem to provide sufficient explanation for blind emergence. And yet we still might be in a simulation. It’s still in our prior because we perceive some obvious implications of intelligence, and I expect it will be hard to keep out of an AGI’s prior for convergent reasons. If the AI reflects not only on its mind but also the world it grew up in and notices, say, that the atoms are symbols (text) bearing imprints of history and optimization from another world, or even simply that there’s no satisfactory explanation for its own origin to be found within its world, a simulation hypothesis will be amplified.
Unless the simulation is optimized to deceive, it will leak corroborating evidence of its truth in expectation, like any physics and history, and like intelligence has leaked evidence of its own implicit simulation destiny all along.
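In Bayesian terms (a toy illustration; the probabilities below are invented, not from the comment), “leaks corroborating evidence in expectation” is just the fact that if observations really are more likely under the sim hypothesis, the expected log-odds update toward it is positive:

```python
# Toy Bayes illustration: if evidence is genuinely more likely under the
# simulation hypothesis, the expected log-odds shift toward "sim" is the
# KL divergence KL(P_sim || P_base) >= 0, so a non-deceptive sim leaks
# evidence of its truth on average. Probabilities are made up.
import math

p_e_given_sim = 0.6   # P(observation | simulated)    -- invented
p_e_given_base = 0.4  # P(observation | base reality) -- invented

expected_update = sum(
    p * math.log(p / q)
    for p, q in [(p_e_given_sim, p_e_given_base),
                 (1 - p_e_given_sim, 1 - p_e_given_base)]
)
print(f"expected log-odds shift toward 'sim' per observation: "
      f"{expected_update:.4f}")  # ~0.081, i.e. positive
```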
Yeah, I mostly agree with all this: intelligent design seems to be an obvious hypothesis. Notice, however, that this is completely different from “the AGI will obviously notice holes in the simulation”.
If the sim is large and long-running enough, a sufficiently capable simulated AGI civilization could have a scientific revolution, start accumulating the results of physics experiments, and eventually determine that the evidence favors intelligent design. But that is also enormously different from individual AGIs quickly noticing holes in the simulation.
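That distinction can be put numerically (again a toy sketch with invented likelihoods): individually weak experimental evidence compounds multiplicatively over a civilization’s worth of experiments, which is a very different mechanism from one AGI spotting a glitch:

```python
# Sketch of the "accumulated physics experiments" path: posterior
# probability of the design/sim hypothesis after n independent
# experiments, each giving only weak evidence. Numbers are invented.
prior_odds = 1 / 1000     # the sim civilization starts out skeptical
likelihood_ratio = 1.05   # each experiment mildly favors design/sim

for n in (10, 100, 500):
    odds = prior_odds * likelihood_ratio ** n
    prob = odds / (1 + odds)
    print(f"after {n:4d} experiments: P(design/sim) = {prob:.3f}")
```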