A third reason “patterns not holding” is a less central issue than it might seem: the Generalized Correspondence Principle. When quantum mechanics or general relativity came along, they still had to agree with classical mechanics in all the (many) places where classical mechanics worked. More generally: if some pattern in fact holds, then it will still be true that the pattern held under the original context even if later data departs from the pattern, and typically the pattern will generalize in some way to the new data. Prototypical example: maybe in the blegg/rube example, some totally new type of item is introduced, a gold donut (“gonut”). Then we’d have a whole new cluster, but the two old clusters are still there; the old pattern is still present in the environment.
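The blegg/rube point can be made concrete with a toy clustering sketch (my construction, not from the original comment; plain Lloyd's k-means with farthest-point initialization): after a third item type appears, the two old cluster centers are still recovered among the new ones.

```python
# Toy sketch (hypothetical construction): old clusters survive the
# arrival of a whole new item type ("gonuts").
import random

random.seed(0)

def far_init(points, k):
    """Farthest-point initialization: pick k mutually distant seeds."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(
            points,
            key=lambda p: min((p[0]-c[0])**2 + (p[1]-c[1])**2 for c in centers)))
    return centers

def kmeans(points, k, iters=25):
    """Plain Lloyd's algorithm on 2-D points; returns sorted centers."""
    centers = far_init(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0]-centers[i][0])**2 + (p[1]-centers[i][1])**2)
            groups[j].append(p)
        centers = [
            (sum(p[0] for p in g)/len(g), sum(p[1] for p in g)/len(g)) if g else centers[i]
            for i, g in enumerate(groups)]
    return sorted(centers)

def blob(cx, cy, n=60):
    """A tight Gaussian blob of items around (cx, cy)."""
    return [(cx + random.gauss(0, 0.1), cy + random.gauss(0, 0.1))
            for _ in range(n)]

bleggs, rubes = blob(0.0, 0.0), blob(5.0, 5.0)   # two original clusters
old_centers = kmeans(bleggs + rubes, k=2)

gonuts = blob(10.0, 0.0)                          # a whole new item type
new_centers = kmeans(bleggs + rubes + gonuts, k=3)

# Both old centers reappear (approximately) among the new centers.
survived = all(
    any(abs(oc[0]-nc[0]) < 0.5 and abs(oc[1]-nc[1]) < 0.5 for nc in new_centers)
    for oc in old_centers)
print(survived)  # True: the old pattern is still present in the environment
```

The new cluster shows up as a third center, but fitting with k=3 leaves the two original centers essentially where they were.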
While a trivial version of something like this holds true, the Correspondence Principle doesn’t apply everywhere. There are two positive results in which a correspondence theorem holds, but there is also a negative result: the correspondence principle fails in the general case of physical laws/rules whose only requirement is that they be Turing-computable. That means there is no way to make theories all add up to normality in every case.
More here:
https://www.lesswrong.com/posts/XMGWdfTC7XjgTz3X7/a-correspondence-theorem-in-the-maximum-entropy-framework
https://www.lesswrong.com/posts/FWuByzM9T5qq2PF2n/a-correspondence-theorem
https://www.lesswrong.com/posts/74crqQnH8v9JtJcda/egan-s-theorem#oZNLtNAazf3E5bN6X
https://www.lesswrong.com/posts/74crqQnH8v9JtJcda/egan-s-theorem#M6MfCwDbtuPuvoe59
https://www.lesswrong.com/posts/74crqQnH8v9JtJcda/egan-s-theorem#XQDrXyHSJzQjkRDZc
If we had evolved in an environment in which the only requirement on physical laws/rules was that they be Turing-computable (and thus that they didn’t have a lot of symmetries or conservation laws or natural abstractions), then in general the only way to make predictions would be to do roughly as much computation as your environment is doing. This generally requires your brain to be roughly equal in computational capacity, and thus similar in size, to the entire rest of your environment (including your body). This is not an environment in which the initial evolution of life is viable (nor, indeed, any form of reproduction). So, to slightly abuse the anthropic principle, we don’t need to worry about it.
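To illustrate the point (a toy example of mine, not from the thread): Rule 110 is a Turing-complete cellular automaton, and absent exploitable structure there is no known general shortcut for predicting its state at step t other than running all t steps yourself, so the predictor does roughly as much computation as the environment.

```python
# Toy example: "predicting" a Turing-complete cellular automaton
# (Rule 110) just means simulating it step by step.

def rule110_step(cells):
    """One step of elementary cellular automaton Rule 110 (zero boundaries)."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        mid = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (mid << 1) | right
        out.append((110 >> neighborhood) & 1)  # rule table packed in the bits of 110
    return out

def predict(initial, t):
    """'Prediction' here is simulation: t steps of work to know step t."""
    cells = list(initial)
    for _ in range(t):
        cells = rule110_step(cells)
    return cells

row = [0] * 31 + [1] + [0] * 32   # a single live cell at index 31
later = predict(row, 20)
# From a single cell, Rule 110 grows exactly one cell to the left per step
# and never to the right, so after 20 steps the live region starts at 11.
print(later.index(1))  # 11
```

A predictor smaller than the automaton would need the rule to have exploitable regularities; for arbitrary Turing-computable rules, no such regularities are guaranteed.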
Maybe, but if the environment admits NP or PSPACE oracles, as in the model linked below, you can make predictions while still being far smaller than your environment, because you can now do bounded Solomonoff induction to infer what the universe is like:
https://arxiv.org/abs/0808.2669
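As a toy version of what bounded Solomonoff induction buys you (my simplification, not the linked paper's construction: the "programs" here are just repeating bit patterns weighted by 2^-length, rather than arbitrary time-bounded programs; note too that the enumeration is itself exponential in the length bound, which is roughly the step an oracle would shortcut):

```python
# Toy bounded Solomonoff induction (hypothetical simplification):
# hypotheses are "the environment repeats pattern p forever", with
# prior weight 2^-len(p), conditioned on the observed prefix.
from itertools import product

def predict_next(observed, max_len=8):
    """Posterior-weighted probability that the next bit is 1."""
    weights = {0: 0.0, 1: 0.0}
    for length in range(1, max_len + 1):
        for pattern in product([0, 1], repeat=length):
            generated = [pattern[i % length] for i in range(len(observed) + 1)]
            if generated[:len(observed)] == list(observed):
                weights[generated[-1]] += 2.0 ** (-length)  # shorter = higher prior
    total = weights[0] + weights[1]
    return weights[1] / total if total else 0.5

# The shortest consistent hypothesis, "01 repeating", says the next bit is 0.
print(predict_next([0, 1, 0, 1, 0, 1]))  # ≈ 0.04
```

The inducer's size depends on the length bound, not on the size of the environment it is predicting, which is the sense in which you can stay "way smaller than your environment."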