The simulator can maintain conservation of, say, mass, while not churning through the computations required for, say, gravity until people have seen enough that they could check whether gravity is holding.
This would save on having to do the gravity calculations. Then, when people, armed with their knowledge of gravity, start looking in more places, the universe must pick a configuration and stick with it. But at that point, all of their observations run into the original problem: freeing up memory somewhere else shows up as higher entropy.
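As a rough sketch of the kind of lazy evaluation being imagined here (all the names are hypothetical, purely for illustration): regions only get resolved when someone looks, and once resolved they are pinned so later observations stay consistent.

```python
import random

class LazyUniverse:
    """Toy model of a simulator that enforces an invariant (total mass)
    cheaply, but defers the expensive gravity computation for a region
    until somebody actually observes it."""

    def __init__(self, total_mass, num_regions):
        self.total_mass = total_mass   # invariant, always conserved
        self.num_regions = num_regions
        self.resolved = {}             # region -> pinned configuration

    def observe(self, region):
        """Return the detailed configuration of a region, computed on demand.
        Once observed, the result is cached (pinned) so that later observers,
        armed with their knowledge of gravity, see a consistent history."""
        if region not in self.resolved:
            self.resolved[region] = self._solve_gravity(region)
        return self.resolved[region]

    def _solve_gravity(self, region):
        # Stand-in for the expensive N-body step; here we just pick an
        # arbitrary configuration consistent with the mass invariant.
        return {
            "mass": self.total_mass / self.num_regions,
            "positions": [random.random() for _ in range(3)],
        }
```

The question the rest of the thread circles around is what it costs, in entropy terms, to keep pinning entries like `resolved` or to free them again.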
On second thought, that doesn’t work either, since the discovery of gravitational laws will constrain their existing predictions of where the planets will be, and this destruction of entropy is unrelated to the entropy that would need to be created to pay for it, which was your objection to begin with.
My best guess at this point is that any resolution will ultimately hinge on a finer-grained information-theoretic analysis of the discovery of universal laws. That is, as you gain evidence for the validity of laws you have noticed, you assign a high but not unity probability to those laws continuing to hold. Each time that probability goes up, it corresponds to a particular reduction in the entropy of your probability distribution.
But, as they say, “to make inferences you have to make assumptions”. There is some entropic cost to making the assumptions necessary for the model with invariants to work, and this must be properly accounted for. I’ll continue to research this.
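A minimal worked version of that bookkeeping, under the toy assumption that belief in a single law is just a binary variable (the function is mine, purely illustrative):

```python
from math import log2

def belief_entropy(p):
    """Shannon entropy, in bits, of the binary belief 'the law holds' held with credence p."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(belief_entropy(0.50))  # 1.000 bit: maximally uncertain
print(belief_entropy(0.99))  # ~0.081 bits: the update destroyed ~0.92 bits of entropy
```

If something like Landauer’s bound applies to the modelers’ hardware, erasing each of those bits costs at least k_B T ln 2 of dissipation somewhere, which is presumably the sort of accounting the “entropic cost of assumptions” would have to survive.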
> This would save on having to do the gravity calculations. Then, when people, armed with their knowledge of gravity, start looking in more places, the universe must pick a configuration and stick with it. But at that point, all of their observations run into the original problem: freeing up memory somewhere else shows up as higher entropy.
This is wrong (even assuming that previous coarse-grained observations don’t matter). If you change the model by refining it, arbitrarily choosing one of the possible fine-grained completions, then this operation on the world-model isn’t reversible: you can’t “un-choose” that arbitrary data and still be able to reconstruct it (unless the data isn’t arbitrary after all and depends only on the world-model that is already there). As a result, no magical increase in entropy occurs and no resources get saved: it isn’t an operation on subsystems within the modeled world, it’s an operation on the whole-world model as a system within the modelers’ world.
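A toy sketch of that irreversibility claim (the functions and the 8-way choice are made up for illustration): refining a coarse state injects arbitrary bits, and the map that forgets them is many-to-one, so it can’t be undone from inside the model.

```python
import random

def refine(coarse_state, n_options=8):
    """Refine a coarse state by arbitrarily picking one of n_options
    compatible fine-grained completions; this injects log2(8) = 3 bits
    of data that the coarse model did not determine."""
    return (coarse_state, random.randrange(n_options))

def coarsen(fine_state):
    """The 'un-choose' map: many fine states collapse to one coarse state,
    so the arbitrary detail cannot be reconstructed afterwards."""
    coarse_state, _arbitrary_detail = fine_state
    return coarse_state

fine = refine("a planet somewhere on this orbit")
assert coarsen(fine) == "a planet somewhere on this orbit"
# There is no inverse of coarsen() that recovers which of the 8 completions
# was chosen, so the refinement is logically irreversible: whatever it costs
# is paid in the modelers' world, not inside the modeled one.
```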
Also, consider that ultimate laws can never, strictly speaking, be discovered: there will always be uncertainty, and maybe there won’t even be asymptotically certain candidates, only turtles going deeper and deeper.