My computer is biased toward not running at 100 petahertz and having 70 petabytes of RAM. My brain is biased toward not using so many complicated models that it needs 1 trillion neurons each with 1 million connections and firing up to 10,000 times per second.
And now for something perhaps more useful than sarcasm: it seems to me that people tend to simply come up with the consistent model that is either the easiest to compute or the simplest to describe. Are heuristics for catching inconsistency, such as “exponential growth/decay rarely occurs in nature”, quickly spread and widely used? How about better approximations, such as logistic growth?
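(To make that aside concrete, here is a minimal sketch comparing the two models; the growth rate r and carrying capacity K are made-up toy values, not anything from the discussion.)

```python
from math import exp

# Toy comparison of exponential vs. logistic growth.
# r (growth rate) and K (carrying capacity) are arbitrary illustrative values.

def exponential(p0: float, r: float, t: float) -> float:
    """Unbounded exponential growth: P(t) = P0 * e^(r*t)."""
    return p0 * exp(r * t)

def logistic(p0: float, r: float, K: float, t: float) -> float:
    """Logistic growth: looks exponential early on, then saturates at K."""
    return K / (1 + (K / p0 - 1) * exp(-r * t))

if __name__ == "__main__":
    # Early on the two curves roughly agree; later the logistic one levels off at K.
    for t in (0, 5, 10, 20, 40):
        print(t,
              round(exponential(1.0, 0.5, t), 1),
              round(logistic(1.0, 0.5, 100.0, t), 1))
```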
Hahaha, anthropic Occam’s Razor! If a science allows simple theories that can fit in our tiny brains, we call it a good science and observe with satisfaction that it “obeys Occam”. If a science doesn’t allow simple theories, we call it a bad science and go off to play somewhere else!
Come to think of it, physics seems to be the only science where Occam’s Razor actually works. Even math is a counterexample: there’s no law of nature saying simple theorems should have short proofs, and easy-to-formulate statements like the four-color theorem or Fermat’s Last Theorem can cause huge explosions of complexity when you try to prove them.
Occam’s razor still applies. If we’re looking for the most elegant possible proof of a theorem (whatever that means), any sufficiently short proof is much more likely to be it than any sufficiently long proof. If you want to take a completely wild guess about what statement an unknown theorem proves, you’re better off guessing short statements than long ones.
If we’re looking for the most elegant possible proof of a theorem (whatever that means), any sufficiently short proof is much more likely to be it than any sufficiently long proof.
Could you try to make that statement more precise? Because I don’t believe it.
If you take the shortest possible proofs of all provable theorems of length less than N, both the maximum and the average length of those proofs will be extremely (uncomputably) fast-growing functions of N. To see that, imagine Gödel-like self-referential theorems that say “I’m not provable in less than 3^^^^3 steps” or some such. They’re all true (because otherwise the axiom system would prove a false statement), short and easy to formulate, and trivially seen to be provable by finite enumeration, but precisely because they’re true, every proof of them must be at least 3^^^^3 steps long, so no short or elegant proof exists.
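(A compressed sketch of that argument in symbols, under the usual assumptions: T is consistent, effectively axiomatized, and able to verify true bounded arithmetic facts; write k for 3^^^^3.)

```latex
% Sketch only; assumes T is consistent, effectively axiomatized,
% and proves every true bounded arithmetic statement. Write k for 3^^^^3.
By the diagonal lemma there is a sentence $\varphi$ with
\[
  T \vdash \varphi \leftrightarrow \neg\,\mathrm{Prf}_{<k}\!\bigl(\ulcorner \varphi \urcorner\bigr),
\]
where $\mathrm{Prf}_{<k}(x)$ formalizes ``$x$ has a $T$-proof of length $< k$''.
If $\varphi$ had a proof of length $< k$, then $\mathrm{Prf}_{<k}(\ulcorner\varphi\urcorner)$
would be a true statement that $T$ can verify, so $T$ would prove $\neg\varphi$ while
also proving $\varphi$; that contradicts consistency. So $\varphi$ is true: it has no
proof of length $< k$. But checking the finitely many proofs of length $< k$ and seeing
that none of them proves $\varphi$ can itself be formalized in $T$, which yields a proof
of $\varphi$, necessarily of length at least $k$.
```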
Another way to reach the same conclusion: if “expected length of shortest proof” were bounded from above by some computable f(N), where N is the theorem length in bits, we could write a simple algorithm that decides whether a theorem is provable: check all possible proofs up to length f(N)*2^N. If the search succeeds, say “yes”. If the search fails, the shortest proof (if it exists) must be longer than f(N)*2^N, which is impossible: there are at most 2^N theorems of length N, so a single shortest proof longer than f(N)*2^N would already push the average above f(N). Therefore no shortest proof exists, therefore no proof exists at all, so say “no”. But we know that provability cannot be decided by an algorithm, so f(N) must grow uncomputably fast.
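(For concreteness, the hypothetical decision procedure that argument rules out would look roughly like this; f, enumerate_proofs and checks are assumed stubs, since no such computable f can actually exist.)

```python
from typing import Callable, Iterable

# Hypothetical decision procedure from the argument above.
# All three ingredients are assumptions of the sketch:
#   f(n)                -- a computable bound on the *average* shortest-proof
#                          length over provable theorems of length <= n
#   enumerate_proofs(b) -- yields every candidate proof of length <= b
#   checks(proof, thm)  -- verifies that `proof` is a valid proof of `thm`
# No such f exists for a sufficiently strong theory; that is the point.

def decides_provability(theorem: str,
                        f: Callable[[int], int],
                        enumerate_proofs: Callable[[int], Iterable[str]],
                        checks: Callable[[str, str], bool]) -> bool:
    n = len(theorem)           # theorem length in bits/symbols
    bound = f(n) * 2 ** n      # if a proof exists, its shortest proof must fit
                               # under this bound, or the average over the at
                               # most 2^n theorems of length n would exceed f(n)
    for proof in enumerate_proofs(bound):
        if checks(proof, theorem):
            return True        # found a proof: provable
    return False               # bound exhausted: no proof exists at all
```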
If we’re looking for the most elegant possible proof of a theorem (whatever that means), any sufficiently short proof is much more likely to be it than any sufficiently long proof.
Could you give a precise meaning to that statement? I can’t think of any possible meaning except “if a proof exists, it has finite length”, which is trivial. Are short proofs really more likely? Why?
More emphasis on the “most elegant possible”.
Sorry for deleting my comment, I got frustrated and rewrote it. See my other reply to grandparent.
I don’t believe it either, by the way.