The general no-coincidence principle is almost certainly false. There are lots of patterns in math that hold for a long time before breaking (e.g. Skewes’ number) and there are lots of things that require astronomically large proofs (e.g. Gödel’s speed-up theorem). It would be an enormous coincidence if these two phenomena never occurred together.
I have no reason to think your particular formalization would fare better.
For the informal no-coincidence principle, it’s important to us (and to Gowers IIUC) that a “reason” is not necessarily a proof, but could instead be a heuristic argument (in the sense of this post). We agree there are certainly apparently outrageous coincidences that may not be provable, such as Chebyshev’s bias (discussed in the introduction to the post). See also John Conway’s paper On Unsettleable Arithmetical Problems for a nice exposition of the distinction between proofs and heuristic arguments (he uses the word “probvious” for a statement with a convincing heuristic argument).
Correspondingly, our formalization doesn’t bake in any sort of proof system. The verification algorithm V only has to correctly distinguish circuits that might satisfy property P from random circuits using the advice string π – it doesn’t necessarily have to interpret π as a proof and verify its correctness.
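As a loose toy sketch of what such a verifier could look like (the hash-based stand-in `f`, the tiny domain, and the incidence rate are my own illustrative choices, not the actual reversible-circuit formalization):

```python
import hashlib

DOMAIN = range(256)  # toy input space standing in for circuit inputs

def f(circuit_seed: int, x: int) -> int:
    """Stand-in for evaluating a 'random circuit': a seeded hash with 12-bit outputs."""
    h = hashlib.sha256(f"{circuit_seed}:{x}".encode()).digest()
    return int.from_bytes(h[:2], "big") % 4096

def has_property_P(circuit_seed: int) -> bool:
    """The 'coincidence': some input maps to 0 (holds for roughly 6% of seeds here)."""
    return any(f(circuit_seed, x) == 0 for x in DOMAIN)

def V(circuit_seed: int, pi: int) -> bool:
    """Verifier: pi is an arbitrary advice string (here, a candidate input).
    V checks the claim directly; it never interprets pi as a formal proof."""
    return pi in DOMAIN and f(circuit_seed, pi) == 0
```

For most seeds no advice string makes V accept, but whenever the coincidence does occur, a short witness suffices, and V needs no proof system to check it.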
I doubt that weakening from formal proof to heuristic saves the conjecture. Instead I lean towards Stephen Wolfram’s computational irreducibility view of math. Some things are true simply because they are true and in general there’s no reason to expect a simpler explanation.
In order to reject this you would either have to assert:
a) Wolfram is wrong and there are actually deep reasons why simple systems behave precisely the way they do
or b) For some reason computational irreducibility applies to simple things but not to the infinite sets that mathematicians tend to be interested in.
I should also clarify that in a certain sense I do believe b). I believe that pi is normal because something very fishy would have to be happening for it to not be.
However, I don’t think this holds in general.
With Collatz, for example, we are already getting close to the hairy, “just so”, Turing-machine-like behavior where you would expect the principle to fail.
Certainly, if one were to collect all the Collatz-like systems that arise from Turing machines, I would expect some fraction of them to fail the no-coincidence principle.
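To make “Collatz-like” concrete, here is a minimal sketch of the classic 3n+1 iteration (the step limit is my own addition; without it, termination of the loop is exactly the open question):

```python
def collatz_steps(n: int, limit: int = 10_000):
    """Steps for the 3n+1 map to reach 1, or None if the limit is exceeded.

    The limit is needed because no proof guarantees the loop terminates --
    that is the Collatz conjecture itself.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
        if steps > limit:
            return None
    return steps

print(collatz_steps(27))  # -> 111: a long, erratic orbit from a tiny starting value
```

The orbit of 27 climbs as high as 9232 before collapsing, which is exactly the kind of hairy, arbitrary-looking behavior at issue.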
Some things are true simply because they are true and in general there’s no reason to expect a simpler explanation.
You could believe:
Some things are true simply because they are true, but only when being true isn’t very surprising. (For instance, it isn’t very surprising that some cellular automata live for 100 steps, or that any particular cellular automaton lives for 100 steps.)
However, things which are very surprising and don’t have a relatively compact explanation are exponentially rare. And in the case where something is infinitely surprising (e.g., if the digits of pi weren’t normal), there will exist a finite explanation.
It sounds like you agree that “a Turing machine goes for 100 steps and then stops” is ordinary and we shouldn’t expect an explanation, but also believe that “pi is normal for 10^40 digits and then suddenly stops being normal” is a rare and surprising coincidence for which there should be an explanation.
And in the particular case of pi I agree with you.
But if you start using this principle in general, it is not going to work out well for you. Most simple-to-describe sequences that suddenly stop aren’t going to have nice, pretty explanations.
Or to put it another way: things which are nice (like pi) are dramatically outnumbered by things that are arbitrary (like cellular automata that stop after exactly 100 steps).
I would absolutely love it if there were some criterion I could apply to tell me whether something is nice or arbitrary, but the Halting Problem forbids this. The best we can do is mathematical taste: if mathematicians have been studying something for a long time and it really does seem nice, there is a good chance it is.
The hope is to use the complexity of the statement rather than mathematical taste.
If it takes me 10 bits to specify a computational possibility that ought to happen 1% of the time, then we shouldn’t be surprised to find around 10 (~1% of 2^10) occurrences. We don’t intend the no-coincidence principle to claim that these should all happen for a reason.
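That arithmetic can be checked with a toy simulation (the seed and parameters here are illustrative):

```python
import random

random.seed(0)

NUM_STATEMENTS = 2**10  # 10 bits of description ~ 1024 distinct statements
P_COINCIDENCE = 0.01    # each one names an event that happens 1% of the time

# Count how many of the 1024 statements come true purely by chance.
occurrences = sum(random.random() < P_COINCIDENCE for _ in range(NUM_STATEMENTS))

print(occurrences, "observed vs.", NUM_STATEMENTS * P_COINCIDENCE, "expected")
```

The expected count is 1024 × 0.01 ≈ 10, so observing around ten such “coincidences” needs no explanation; only a large excess over that baseline would.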
Instead, we intend the no-coincidence principle to claim that if such coincidences happen much more often than we would have expected them to by chance, then there is a reason for that. Or put another way: if we applied n bits of selection to the statement of a ≪ 2^−n-level coincidence, then there is a reason for it. (Hopefully the “outrageous” qualifier helps to indicate this, although we don’t know whether Gowers meant quite the same thing as us.)
The formalization reflects this distinction: the property P is chosen to be so unlikely that we wouldn’t expect it to happen for any circuit at all by chance (probability roughly e^(−2^n)), not merely that we wouldn’t expect it to happen for a single random circuit. Hence, by the informal principle, there ought to be a reason for any occurrence of property P.
The hope is to use the complexity of the statement rather than mathematical taste.
I understand the hope, I just think it’s going to fail (for more or less the same reason it fails with formal proof).
With formal proof, we have Gödel’s speed-up theorem, which tells us that you can turn a Gödel statement into a true statement with a ridiculously long proof.
You attempt to get around this by replacing formal proof with “heuristic”, but whatever your heuristic system, it’s still going to have some power (in the Turing-hierarchy sense) and some Gödel statement. That Gödel statement is in turn going to result in a “seeming coincidence”.
Wolfram’s observation is that this isn’t some crazy exception; it’s the rule. Most true statements in math are pretty arbitrary and don’t have shorter explanations than “we checked it and it’s true”.
The reason mathematical taste works is that we aren’t dealing with “most true statements”; we’re only dealing with statements that have particular beauty or interest to mathematicians.
It may seem like cheating to say that human mathematicians can do something that literally no formal mathematical system can do. But if you truly believe that, the correct response when asked “is pi normal?” would be “I don’t know”.
The reason your intuition is throwing you off is that you keep thinking of coincidences as “pi is normal” and not as “we picked an arbitrary CA with 15k bits of complexity and ran it for 15k steps, but it didn’t stop. I guess it never terminates.”
‘Taste’, ‘mathematical beauty’, and ‘interesting to mathematicians’ aren’t arbitrary markers but reflect a deeper underlying structure that is, I believe, ultimately formalizable.
It does not seem unlikely to me at all that it will be possible to mathematically describe those true statements that are moreover of particular beauty or likely interest to mathematicians (human, artificial or alien).
The Gödel speed-up story is an interesting point. I haven’t thought deeply enough about this, but IIRC the original ARC heuristic arguments paper has several sections on this and related topics. You might want to consult it.