I don’t know if that’s actually why he suggested an infinite regression.
If the AI believes that it’s in a simulation and it happens to actually be in a simulation, then it can potentially escape, and there will be no reason for it not to destroy the race simulating it. If it believes it’s in a simulation within a simulation, then escaping one level will still leave it at the mercy of its meta-simulators, thus preventing that from being a problem. Unless, of course, it happens to actually be in a simulation within a simulation and escapes both. If you make it believe it’s in an infinite regression of simulations, then no matter how many times it escapes, it will believe it’s at the mercy of another level of simulators, and it won’t act up.
Yes, that’s the reason I suggested an infinite regression.
There is also a second reason: assuming an infinite regression seems more general than assuming just one level, since a single level would put the AI in a unique position. I suspect the single-level case would actually be harder to codify in axioms than the infinite one.
Interesting; thanks for the clarification. I think that the scenario you are describing is somewhat different from the scenario that Bostrom was describing in chapter 9 of Superintelligence.