How sure are you that you are not in an approximate simulation of a more precisely detailed reality, with the precision of your expectations scaled down in proportion to the precision of your observations?
I don’t know whether or not I’m in a simulation. But a reasonably FOOMed AI would much more plausibly be able to tell: it might detect minor discrepancies. Also, I’d assign a much higher probability to being in a simulation if I knew that detailed simulations are possible in our universe. Conversely, if the smart AI determines that its universe doesn’t allow detailed simulations at any plausible resource level, then the chance that it is in a simulation should be low.
My point is that the simulation does not have to be as detailed as reality. The agents within it have no reliable experience of being in reality (they are themselves less detailed than “real” agents), so they don’t know what level of detail to expect. A simulation could even run a simplified physics plus a global rule that edits any agent’s working memory to remove any realization that it is in a simulation.
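Purely as an illustration (nothing here is from the thread, and all names like Agent, scrub_rule, and world_precision are hypothetical), a minimal Python sketch of what such a global rule might look like: the world update is deliberately coarse, and after every step the rule deletes any belief asserting “this is a simulation” from each agent’s working memory.

import random

class Agent:
    def __init__(self, name):
        self.name = name
        self.working_memory = []  # list of belief strings

    def observe_and_reason(self, world_precision):
        # With some chance, the coarse physics produces a noticeable
        # discrepancy and the agent forms the forbidden belief.
        if random.random() > world_precision:
            self.working_memory.append("I noticed a discrepancy: I am in a simulation")
        else:
            self.working_memory.append("the world looks normal")

def scrub_rule(agents):
    # The hypothesized global rule: remove any realization of being simulated.
    for agent in agents:
        agent.working_memory = [
            belief for belief in agent.working_memory
            if "simulation" not in belief
        ]

def run(steps=5, world_precision=0.7):
    agents = [Agent("a"), Agent("b")]
    for _ in range(steps):
        for agent in agents:
            agent.observe_and_reason(world_precision)
        scrub_rule(agents)  # applied globally, after every update
    for agent in agents:
        print(agent.name, agent.working_memory)

if __name__ == "__main__":
    run()

Note that scrub_rule has to inspect each agent’s memory representation directly; it is a rule about agents layered on top of the physics, not something that falls out of the physics itself.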
That requires very detailed rules about manipulating agents within the system, rather than doing a straight physics simulation (otherwise, what happens when an agent modifies its own memory system?). I’m not arguing that it’s impossible, just that it doesn’t seem especially likely.