I vaguely see what you’re getting at—every observation or interaction forces the simulator to calculate what you see, but also allows it to cheat in other areas. But I’m not sure how exactly this would work on the level of bits and programs?
This is a very conceptually interesting question.
Bah! Implementation issue! :-P
At the level you’re asking about (if I understand you correctly), the program can just reallocate memory from whatever gained entropy to whatever lost entropy.
Like the example in the comments section of my blog: if you learn the location of a ball, the program now has to store it as being in a definite location; but I also had to power my brain to learn that, so the program doesn’t have to be as precise in storing information about the chemical bonds, which were moved to a higher-entropy state.
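One way to picture that reallocation is as an entropy ledger across subsystems. A toy sketch of the bookkeeping, where the 1024-cell room and the Landauer-style lower bound are my own illustration, not anything established in this thread:

```python
import math

# Hypothetical bookkeeping: bits of uncertainty the simulator carries
# for each subsystem, before and after an observer localizes the ball.
N_CELLS = 1024  # assumed: the ball could be in any of 1024 cells

# Before: ball position unknown -> log2(1024) = 10 bits of uncertainty
ball_bits_before = math.log2(N_CELLS)
brain_bits_before = 0.0  # brain/environment start in a known state

# After: ball pinned to one cell (0 bits of uncertainty), but the act
# of measuring dumped at least that much entropy into the brain and
# its waste heat (a Landauer-style lower bound; the real cost is higher).
ball_bits_after = 0.0
brain_bits_after = ball_bits_before

total_before = ball_bits_before + brain_bits_before
total_after = ball_bits_after + brain_bits_after
print(total_before, total_after)  # 10.0 10.0 -- the budget just moved
```

The point is only that the total never shrinks: the bits freed in one column have to show up in another.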
Spoken like a true theoretician. But it’s hard to see an implementation that is optimal in exploiting this memory bound.
I mean, imagine that we have a pocket universe where we can have many numbers (particles?) which all must add up to 1000, and we have your normal programming types like bit, byte/int, integer etc.
If we start out with a single 1000, and then the ‘laws of physics’ begin dividing it by 10 (giving us ten 100s), how is the simulator going to be smart enough to take its fixed section of RAM and rewrite the single large 1000 integer into ten smaller ints, and so on down to a thousand 1s, which could be single bits?
Is there any representation of the universe’s state which achieves these tricks automatically, or does the simulation really just have to include all sorts of conditionals like ‘if changed(x): if x > 128, store x as an Integer; else if 1 < x ≤ 128, store x as an int; else store x as a bit’ in order to preserve the constant memory usage?
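One candidate for the ‘automatic’ part is a universal, self-delimiting integer code, where the stored width tracks the value with no conversion conditionals at all. A minimal sketch using the Elias gamma code — my choice of encoding, purely illustrative:

```python
def gamma_bits(n):
    """Bits used by the Elias gamma code for a positive integer n:
    2*floor(log2 n) + 1. Small values are cheap, large ones cost more,
    and the boundary between 'bit', 'int', and 'Integer' never appears
    explicitly anywhere."""
    assert n >= 1
    return 2 * n.bit_length() - 1

def state_cost(values):
    """Total bits to serialize the pocket universe's state as one
    stream of gamma-coded numbers."""
    return sum(gamma_bits(v) for v in values)

print(state_cost([1000]))      # one big number: 19 bits
print(state_cost([100] * 10))  # ten mid-sized numbers: 130 bits
print(state_cost([1] * 1000))  # a thousand 1s: 1000 bits
```

Note the total is not constant here: it grows as the 1000 spreads out, which is just the entropy increase that, on the reallocation picture above, the simulator would have to pay for with memory freed elsewhere.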
I don’t think this hypothetical universe is comparable in the relevant ways: it must be capable of representing the concept of an observer, and what that observer knows (has mutual information with), and adhere to the 2nd law of thermodynamics. Which I don’t think is the case here.
Wait, there has to be an observer? I thought you were really just talking about entangled wave-functions etc.
No, that’s the point Jack brought up. I was only discussing the issues that arise in the hypothetical scenario in which the universe is simulated in an “overworld” and must successfully continue to fool us.