Suppose humanity gains access to effectively infinite computing power and puts AIXItl, or something similar, into a copy of the universe, simulated at whatever level of physics unifies quantum mechanics and gravitation into a coherent, leakproof framework. AIXItl would then assign an extremely low probability to the hypothesis that it is inside a simulation: a leakproof simulation produces observations identical to those of the base universe, so the simulation hypothesis can never gain on evidence and is penalized by the extra description length of the simulating machinery. Only if the simplest unification of quantum mechanics and gravity turns out to be “we’re in a simulation” would a hyperintelligent AI in a perfect simulation of our universe come to believe that it is in a simulation.
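To see why, here is a minimal Bayesian sketch, ordinary Python rather than AIXItl itself; the hypothesis names and bit counts are assumptions chosen purely for illustration. Under a Solomonoff-style prior each hypothesis is weighted by 2^(-description length), and since a leakproof simulation predicts every observation exactly as base physics does, no amount of data ever moves the odds away from that prior:

```python
import math

# Assumed description lengths in bits: the simulation hypothesis is the
# base-physics program plus a ~50-bit "simulation wrapper" (illustrative
# numbers, not real complexity estimates).
BITS = {"base_physics": 100, "simulated_physics": 150}

# Solomonoff-style prior: P(h) proportional to 2^-len(h), normalized.
weights = {h: 2.0 ** -b for h, b in BITS.items()}
total = sum(weights.values())
weights = {h: w / total for h, w in weights.items()}

def bayes_update(weights, likelihoods):
    """One Bayesian update: multiply each weight by P(obs | h), renormalize."""
    posterior = {h: w * likelihoods[h] for h, w in weights.items()}
    z = sum(posterior.values())
    return {h: w / z for h, w in posterior.items()}

# A leakproof simulation assigns every observation exactly the same
# probability that base physics does, so the likelihoods are always equal.
for _ in range(100_000):  # a hundred thousand observations change nothing
    weights = bayes_update(weights, {"base_physics": 1.0,
                                     "simulated_physics": 1.0})

# The posterior odds stay pinned at the prior ratio, 2^-50.
odds = weights["simulated_physics"] / weights["base_physics"]
print(f"posterior odds of simulation: 2^{math.log2(odds):.0f}")
```

The simulation hypothesis can only win on prior simplicity, never on evidence, which is exactly the escape clause above: the shortest program for our observations would itself have to be a simulation.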
So the epistemically perfect AI would come to an incorrect conclusion. This does not imply a flaw in its method for forming beliefs; it merely restates the tautology that there is no way to find out what there is no way to find out.