If someone wrote down a version of the laws of physics that wasn’t computable, you wouldn’t be able to use it to compute any predictions, so no one would use such laws.
I don’t think that’s entirely true. Consider a well-defined real number which is uncomputable yet whose approximations can be computed, such as Chaitin’s omega; now imagine laws of physics which use omega somewhere (perhaps as a physical constant). The full laws are uncomputable due to the inclusion of omega, yet you could compute a finite prefix of omega and make your predictions with that. You could even show that the laws are not just that finite prefix by computing further digits of omega and demonstrating that the additional digits yield additional predictive accuracy.
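To make the “compute a finite prefix” step concrete, here is a minimal Python sketch of the usual approximation-from-below idea. The `halts` function is a toy stand-in, not a real universal prefix-free machine, so the number it bounds only mimics the structure of omega:

```python
from fractions import Fraction

def halts(bits: str, max_steps: int) -> bool:
    """Toy stand-in for a prefix-free machine: a 'program' halts iff it is a
    run of zeros followed by a single 1 ('1', '01', '001', ...) and its run
    fits within the step budget. Purely illustrative."""
    return bits.endswith("1") and set(bits[:-1]) <= {"0"} and len(bits) <= max_steps

def omega_lower_bound(max_len: int, max_steps: int) -> Fraction:
    """Sum 2^-|p| over every program observed to halt within max_steps.
    Raising max_len or max_steps can only raise the bound; this mirrors how
    omega is approached from below without ever certifying a longer prefix."""
    total = Fraction(0)
    for length in range(1, max_len + 1):
        for i in range(2 ** length):
            program = format(i, "b").zfill(length)
            if halts(program, max_steps):
                total += Fraction(1, 2 ** length)
    return total

print(float(omega_lower_bound(max_len=8, max_steps=5)))  # 0.96875 for this toy machine
```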
The full laws are uncomputable due to the inclusion of omega, yet you could compute a finite prefix of omega
You can’t compute a prefix of Chaitin’s omega of any arbitrary length. You can compute prefixes only up to some finite length, and this length is itself uncomputable.
From our perspective, the length up to which it is computable is going to be arbitrary, and until we hit it, at each digit we will confront the same epistemic problem: “is the prefix of omega that seems to be embedded in our particular physics a finite, computable prefix, in which case our physics is perfectly computable, or is it a genuinely uncomputable constant buried in our physics?”
This has been discussed in the past as the ‘oracle hypercomputation’ problem: suppose you found a device which claimed to be an oracle for the halting of Turing machines, and you test it and it turns out to be accurate on the n Turing machines you are able to run to the point where they either halt or repeat a state. How much, if at all, do you credit its claim to be an oracle doing hypercomputation? After all, it could just be a random number generator, and in 1 out of the 2^n possible outputs its predictions will be completely correct ‘by chance’; or it could be using heuristics or something. What is one’s prior for hypercomputation/oracles being possible, and how much does one update?
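The ‘by chance’ term suggests a worked update. A minimal sketch, assuming the only alternative hypothesis is a uniform random guesser (the prior value is illustrative, and the heuristics case is deliberately left out of the model):

```python
from fractions import Fraction

def posterior_oracle(prior: Fraction, n_correct: int) -> Fraction:
    """P(genuine oracle | n correct answers on decidable test cases),
    treating the only alternative as a uniform random guesser."""
    p_data_given_oracle = Fraction(1)                    # an oracle never errs on these
    p_data_given_guesser = Fraction(1, 2 ** n_correct)   # the 'by chance' term above
    evidence = prior * p_data_given_oracle + (1 - prior) * p_data_given_guesser
    return prior * p_data_given_oracle / evidence

# Even a tiny prior gets swamped quickly, but no finite number of correct
# answers rules out a good-enough heuristic rather than a true oracle.
for n in (10, 50, 100):
    print(n, float(posterior_oracle(Fraction(1, 10**12), n)))
```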
This was discussed on the advanced decision theory mailing list a while back, IIRC, and I don’t think they came to any solid conclusion either way.
I see a problem with this: there doesn’t seem to be a way to tell whether omega itself is in the laws of physics or some finite-precision approximation to omega. Given any set of finite observable phenomena and any finite amount of time, there will be some finite-precision approximation to any real number which is sufficient in the equations to explain all observations, assuming the models used are otherwise correct and computable. How would one tell if the universe uses the real value of Pi or a finite-precision version of Pi whose precision is epsilon greater than what is needed to calculate any observable value?
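A minimal sketch of that point, assuming an illustrative observable (a circumference) and an assumed instrument resolution; past some finite number of digits the truncation becomes observationally indistinguishable from Pi:

```python
import math

def truncate(x: float, digits: int) -> float:
    """Keep only the first 'digits' decimal places, discarding the rest."""
    scale = 10 ** digits
    return math.floor(x * scale) / scale

measurement_error = 1e-6            # assumed instrument resolution, illustrative
radius = 2.0                        # illustrative observable: a circumference
exact = 2 * math.pi * radius        # prediction using (double-precision) pi
for digits in range(1, 12):
    approx = 2 * truncate(math.pi, digits) * radius
    if abs(exact - approx) < measurement_error:
        print(f"{digits} decimal places of pi already suffice for this observation")
        break
```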
How would one tell if the universe uses the real value of Pi or a finite-precision version of Pi whose precision is epsilon greater than what is needed to calculate any observable value?
How does one know the laws of physics won’t suddenly change tomorrow, i.e., how does one distinguish a universe governed by a certain set of laws from one governed by an approximation of the same laws that stops working on a certain day?
How would one tell if the universe uses the real value of Pi or a finite-precision version of Pi whose precision is epsilon greater than what is needed to calculate any observable value?
You can’t. However, if you somehow found an encoding of a physical constant that was highly compressible, such as 1.379[50 digits]0000000000000000, or some other sort of highly regular series, it would be strong evidence that our universe is both computable and, indeed, computed. (No such constant has yet been found, but we haven’t looked very hard yet.)
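A rough sketch of what such a compressibility check might look like. A general-purpose compressor only upper-bounds the complexity of the digits, and both digit strings below are made up for illustration:

```python
import random
import zlib

def compressed_size(digits: str) -> int:
    """Upper bound on the digit string's complexity via an off-the-shelf compressor."""
    return len(zlib.compress(digits.encode()))

regular = "1.379" + "0" * 60        # the kind of highly regular pattern imagined above
typical = "1." + "".join(random.choice("0123456789") for _ in range(63))

print("regular:", compressed_size(regular), "bytes")
print("typical:", compressed_size(typical), "bytes")
```

A measured constant whose digits compressed like `regular` would be the surprising observation; so far, measured constants look like `typical`.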
Depending on what you mean by “constant”… The exponent in Coulomb’s law is 2. To about 13 decimal places. I would expect some of the similar constants in other formulas to be comparably compressible.