[Question] Egan’s Theorem?

When physicists were figuring out quantum mechanics, one of the major constraints was that it had to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well—i.e. most of the macroscopic world. Likewise for special and general relativity—they had to reproduce Galilean relativity and Newtonian gravity, respectively, in the parameter ranges where those were known to work. Statistical mechanics had to reproduce the fluid theory of heat; Maxwell’s equations had to agree with more specific equations governing static electricity, currents, magnetic fields and light under various conditions.
Even if the entire universe undergoes some kind of phase change tomorrow and the macroscopic physical laws change entirely, it would still be true that the old laws did work before the phase change. Any new theory would still have to be consistent with the old laws working, where and when they actually did work.
This is Egan’s Law: it all adds up to normality. When a new theory or new data comes along, the old theories are still just as true as they ever were. New models must reproduce the old ones in all the places where the old ones worked; otherwise the new models are simply incorrect wherever the old models work and the new models disagree with them.
It really seems like this should be not just a Law, but a Theorem.
I imagine Egan’s Theorem would go something like this. We find a certain type of pattern in some data. The pattern is highly unlikely to arise by chance, or allows significant compression of the data, or something along those lines. Then the theorem would say that, in any model of the data, either:
1. The model has some property (corresponding to the pattern), or
2. The model is “wrong” or “incomplete” in some sense—e.g. we can construct a strictly better model, or show that the model consistently fails to predict the pattern, or something like that. (A toy numerical sketch of this dichotomy follows below.)
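To make the dichotomy concrete, here is a minimal sketch, not a proof and not the theorem being asked for. Everything in it is an illustrative assumption: we generate data containing an obvious pattern (y is nearly a linear function of x), then compare a model that ignores the pattern against one that encodes it. The pattern-blind model lands in branch 2, since a strictly better model can be constructed.

```python
# Toy sketch only: the setup and the Gaussian modeling choices below are
# illustrative assumptions, not part of any actual theorem.
import numpy as np

rng = np.random.default_rng(0)

# Data with an obvious pattern: y is (nearly) a linear function of x.
x = rng.normal(size=2000)
y = 2 * x + 0.1 * rng.normal(size=2000)
x_tr, x_te = x[:1000], x[1000:]
y_tr, y_te = y[:1000], y[1000:]

def heldout_loglik(train_resid, test_resid):
    """Average held-out Gaussian log-likelihood, variance fit on training residuals."""
    s2 = train_resid.var()
    return np.mean(-0.5 * np.log(2 * np.pi * s2) - test_resid**2 / (2 * s2))

# Pattern-blind model: y ~ Normal(mean, variance), ignoring x entirely.
ll_blind = heldout_loglik(y_tr - y_tr.mean(), y_te - y_tr.mean())

# Pattern-aware model: y ~ Normal(a*x + b, residual variance).
a, b = np.polyfit(x_tr, y_tr, 1)
ll_aware = heldout_loglik(y_tr - (a * x_tr + b), y_te - (a * x_te + b))

print(f"pattern-blind model, held-out avg log-likelihood: {ll_blind:.2f}")
print(f"pattern-aware model, held-out avg log-likelihood: {ll_aware:.2f}")
# The pattern-aware model wins by a wide margin: the pattern-blind model
# falls under branch 2 above, because a strictly better model exists.
```

Of course this only shows that a pattern-blind model is improvable in one toy case; the hoped-for theorem would have to say something like this for broad classes of patterns and models.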
The meat of such a theorem would be finding classes of patterns which imply model properties less trivial than just “the model must predict the pattern”—i.e. patterns which imply properties we actually care about. Structural properties like (approximate) conditional independencies seem particularly relevant, as well as properties involving abstractions/embedded submodels (in which case the theorem should tell us how to find the abstraction/embedding).
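As a concrete instance of the kind of structural pattern meant here, the following sketch detects an approximate conditional independence of x and y given z in raw data. The variable names and the partial-correlation test are my own assumptions for the example, not anything from a known theorem; the hope is that finding such a pattern would force any model that isn’t strictly improvable to contain a corresponding factorization, not merely to reproduce the raw numbers.

```python
# Illustrative sketch: the setup and the partial-correlation test are
# assumptions for this example, not part of any known "Egan's Theorem".
import numpy as np

rng = np.random.default_rng(1)

# x and y are dependent only through a common cause z: x <- z -> y.
z = rng.normal(size=5000)
x = z + 0.3 * rng.normal(size=5000)
y = z + 0.3 * rng.normal(size=5000)

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c from each."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print(f"corr(x, y)     = {np.corrcoef(x, y)[0, 1]:.2f}")  # strong dependence
print(f"corr(x, y | z) = {partial_corr(x, y, z):.2f}")    # approximately zero
# The near-zero partial correlation is the kind of structural pattern an
# Egan-style theorem would ideally guarantee shows up, as a conditional
# independence, in any model of this data that isn't strictly improvable.
```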
Does anyone know of theorems like that? Maybe this is equivalent to some standard property in statistics and I’m just overthinking it?