Just to isolate one of (I suspect) very many problems with this, the parenthetical at the end of this paragraph is both totally unjustified and really important to the plausibility of the scenario you suggest:
U(x) Mind Prison Sim: A sim universe which is sufficiently detailed and consistent such that entities with intelligence up to X (using some admittedly heuristic metric), are incredibly unlikely to formulate correct world-beliefs about the outside world and invisible humans (a necessary perquisite for escape)
I assume you mean “prerequisite.” There is simply no reason to think that you know what kind of information about the outside world a superintelligence would need to have to escape from its sandbox, and certainly no reason for you to set the bar so conveniently high for your argument.
(It isn’t even true in the fictional inspiration [The Truman Show] you cite for this idea. If I recall, in that film the main character did little more than notice that something was fishy, and then he started pushing hard where it seemed fishy until the entire house of cards collapsed. Why couldn’t a sandboxed AI do the same? How do you know it wouldn’t?)
I assume you mean “prerequisite.”
Thanks, fixed the error.
There is simply no reason to think that you know what kind of information about the outside world a superintelligence would need to have to escape from its sandbox, and certainly no reason for you to set the bar so conveniently high for your argument.
I listed these as conjectures, and there absolutely is reason to think we can figure out what kinds of information a super-intelligence would need to arrive at the conclusion “I am in a sandbox”.
There are absolute, provable bounds on intelligence. AIXI is the theoretical upper limit on how much an agent can learn from its observations and how well it can act on them. But there are things that even AIXI cannot possibly know for certain.
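(For concreteness, this is the standard expectimax definition of AIXI from Hutter; here $U$ denotes the universal Turing machine in that definition, not the U(x) sim above, $\ell(q)$ is the length of program $q$, the $a_i$, $o_i$, $r_i$ are actions, observations and rewards, and $m$ is the horizon.)
$$a_k \;:=\; \arg\max_{a_k}\sum_{o_k r_k}\cdots\;\max_{a_m}\sum_{o_m r_m}\,\big[\,r_k+\cdots+r_m\,\big]\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m} 2^{-\ell(q)}$$
Everything AIXI believes is squeezed out of the observation stream $o_1 r_1 \ldots$; it has no other channel to the world.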
You can easily construct toy universes where it is provably impossible for even AIXI to ever escape. The more important question is how that scales up to big, interesting universes.
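Here is a minimal sketch of what I mean, in Python. The hypothesis names and the coin-flip observations are made up for illustration, and this is just the Bayesian core of the argument rather than AIXI itself: if the sim screens off the outside world, the agent's posterior over outside-world hypotheses never moves.

```python
# Toy "mind prison": two hypotheses about the outside world generate
# exactly the same observation stream inside the sandbox, so a Bayesian
# agent's posterior over them never moves, no matter how long it watches.
import random

def sandbox_step(rng):
    """One tick of the sim as seen from inside: a coin-flip observation.
    Both outside-world hypotheses below assign this same distribution to
    every observation, which is the whole point."""
    return rng.random() < 0.5

# Priors over two (hypothetical) beliefs about what lies outside the sandbox.
prior = {"humans_outside": 0.5, "no_humans_outside": 0.5}

def likelihood(hypothesis, obs):
    # Because the sim screens off the outside world, the likelihood of any
    # observation is identical under both hypotheses.
    return 0.5

posterior = dict(prior)
rng = random.Random(0)
for _ in range(10_000):
    obs = sandbox_step(rng)
    # Bayes update: multiply by the likelihood, then renormalize.
    for h in posterior:
        posterior[h] *= likelihood(h, obs)
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

print(posterior)  # still {'humans_outside': 0.5, 'no_humans_outside': 0.5}
```

Ten thousand observations later the agent is exactly as uncertain about the invisible humans as it was at the start.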
A Mind Prison is certainly possible on at least a small scale, and we have small proofs already. (For example, AIXI cannot escape from a Pac-Man universe: there is simply not enough information in that universe to learn about anything as complex as humans.)
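Slightly more formally (this is just standard Bayes, with $\mu_1$ and $\mu_2$ as my labels for two candidate outside worlds): if both hypotheses assign the same probability to every observation history the sim can produce,
$$\mu_1(o_{1:t}\mid a_{1:t}) \;=\; \mu_2(o_{1:t}\mid a_{1:t}) \quad \text{for all } a_{1:t},\, o_{1:t},$$
then Bayes' rule leaves their posterior odds exactly where the prior put them:
$$\frac{P(\mu_1\mid a_{1:t}, o_{1:t})}{P(\mu_2\mid a_{1:t}, o_{1:t})} \;=\; \frac{P(\mu_1)\,\mu_1(o_{1:t}\mid a_{1:t})}{P(\mu_2)\,\mu_2(o_{1:t}\mid a_{1:t})} \;=\; \frac{P(\mu_1)}{P(\mu_2)}.$$
No observation made inside the sim can shift those odds, so no agent, AIXI included, can come to believe in the invisible humans unless the sim leaks information about them.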
So you have simply assumed a priori that a Mind Prison is impossible, when in fact that is not the case at all.
The stronger conjectures are just that, conjectures.
But consider this: how do you know that you are not in a Mind Prison right now?
I mentioned The Truman Show only to conjure the idea, but it's not really that useful on many levels: a simulation is naturally vastly better. Truman quickly realized that the world was confining him geographically. (It's a movie plot, and it would be boring if he remained trapped forever.)