If you, the simulator, think you’re in a simple universe, and you decide to intervene in the simulation… then you should think the simulated people would be perfectly right to infer that they are being intervened on, because the intervention happens in such a simple universe!
Suppose a program p is playing chess (or go or checkers*) against someone or something, models its opponent as a program, and tries to simulate what that opponent will do in response to some moves p is considering.
Then the simulated opponent would think it was in a universe much simpler than ours, and to convince it otherwise we would have to give it a number of bits roughly equal to the difference between the two universes’ description complexities.
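The setup above — a program that models its opponent as a program and simulates its replies — can be sketched in a few lines. This is an illustrative toy, not anything from the post: the game (Nim, take 1–3 stones, last stone wins) and the function names are made up. The point is that `best_move` predicts the opponent by recursively running the very same decision procedure on the opponent’s position.

```python
# Toy sketch: a player that models its opponent as a copy of itself.
# Game: Nim-style subtraction game -- take 1, 2, or 3 stones; whoever
# takes the last stone wins. (Game and names are illustrative only.)

def winning(stones):
    """True if the player to move can force a win from this position."""
    # Try every legal move; a position is winning if some move leaves
    # the opponent -- simulated by this same function -- in a losing one.
    return any(not winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Choose a move by simulating the opponent with our own procedure."""
    for take in (1, 2, 3):
        if take <= stones and not winning(stones - take):
            return take
    return 1  # no winning move exists: take one stone and hope
```

In this game the positions divisible by 4 are exactly the losing ones, so `best_move` always steers the opponent onto a multiple of 4 when it can — the “simulated opponent” inside the recursion lives in a universe that consists of nothing but this game.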
*Solved under at least one rule set.
What? Pretty sure chess AIs aren’t that complex today. They handle a simple world.