My intuition is that both P vs NP and Collatz are hard for a similar reason: they’re asking about arbitrary problems, without any interesting special structure.
The vast majority of our math is developed to handle problems with some special structure—symmetry, linearity, locality, etc. P vs NP is asking “ok, but how hard is it to solve arbitrary problems without any special structure, other than the ability to check a solution efficiently?”. And Collatz is just a random-ass problem which doesn’t seem to have any special structure to it.
If we want to make progress on arbitrary problems without any special structure, then we need insights into problem-solving in general, not just the myriad of special cases which we usually think about. Our insights need to be maximally generalizable, in some sense. In the case of Collatz, there might exist some special trick for it, but it’s not any of the tricks we know. In the case of P vs NP, the problem itself is directly about very general problem-solving.
I disagree. Any given arbitrary problem is an instance of some broader set of problems, such that knowing how to solve that arbitrary problem allows you to trivially solve any other problem in that broader set. Conversely, if you can’t trivially solve it, that means you don’t understand the entirety of some fairly broad segment of concept-space.
In that sense, there are no problems “without any interesting special structure” that are hard to solve relative to some conceptual toolbox. If you can’t trivially solve it, there are interesting structures to be uncovered on the way to solving it. (“Triviality” is a variable here, too. The more non-trivial solving a problem is, the greater the distance between your tools and the tools needed to solve it; the more ignorant you are, and so the more interesting structures you can uncover on the way to solving it.)
NP-complete problems are… I wanted to say “the exception”, but they’re not, really. They’re not conceptually hard to solve (unless it turns out P=NP after all, but surely not), just computationally so.
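The check-vs-solve asymmetry being pointed at here can be made concrete. Below is a minimal sketch (the instance and helper names are my own, purely illustrative): for SAT, verifying a candidate assignment takes time linear in the formula, while the naive solver enumerates all 2^n assignments.

```python
from itertools import product

# A tiny SAT instance in CNF: each clause is a list of literals,
# where a positive int i means "variable i" and -i means "not variable i".
clauses = [[1, 2], [-1, 3], [-2, -3]]
n_vars = 3

def satisfies(assignment, clauses):
    """Checking a candidate solution: linear scan over the formula."""
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def brute_force(clauses, n_vars):
    """Finding a solution the naive way: up to 2^n candidate assignments."""
    for assignment in product([False, True], repeat=n_vars):
        if satisfies(assignment, clauses):
            return assignment
    return None

print(brute_force(clauses, n_vars))  # → (False, True, False)
```

The conceptual content of both functions is trivial; only the search space of `brute_force` grows exponentially, which is the sense in which the hardness is computational rather than conceptual.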
Edit: Again, Fermat’s Last Theorem is a good example. It’s the most random-ass problem among random-ass problems, but the solution to it required developing some broadly applicable tools.
And Collatz is just a random-ass problem which doesn’t seem to have any special structure to it.
The Collatz Conjecture has a lot of structure to it:
It is false for 3n-1
It is undecidable for generalizations of it
It is true empirically, up to 10^20
It is true statistically: a result of Terence Tao establishes that “almost all initial values of n eventually iterate to less than log(log(log(log(n))))” (or, indeed, below any function diverging to infinity, such as the inverse Ackermann function)
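The first two items in the list above are easy to check directly: the 3n+1 orbit of a starting value falls to 1, while the analogous 3n−1 map already fails on n = 5, which enters a cycle. A minimal sketch (helper names `orbit`, `collatz`, and `variant` are mine, not standard):

```python
def orbit(n, step, max_iters=1000):
    """Iterate `step` from n, recording values until we reach 1 or revisit a value."""
    seen, out = set(), []
    while n != 1 and n not in seen and len(out) < max_iters:
        seen.add(n)
        out.append(n)
        n = step(n)
    out.append(n)
    return out

collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1  # the 3n+1 map
variant = lambda n: n // 2 if n % 2 == 0 else 3 * n - 1  # the 3n-1 map

print(orbit(27, collatz))  # long orbit, but it ends at 1
print(orbit(5, variant))   # → [5, 14, 7, 20, 10, 5]: a cycle, never reaching 1
```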
In the case of Collatz, there might exist some special trick for it, but it’s not any of the tricks we know.
I am not sure what counts or doesn’t count as a trick here. Is it only specific proof techniques, or entire fields (e.g., number theory) too?
P vs NP is asking “ok, but how hard is it to solve arbitrary problems without any special structure, other than the ability to check a solution efficiently?”
Isn’t this true of most already proven theorems in complexity theory too? Among other things:
We can prove the separation of P vs EXPTIME
We cannot prove the separation of P vs PSPACE (PSPACE! Never mind NP)
We can prove that PSPACE = APTIME = IP = QIP
If we want to make progress on arbitrary problems without any special structure, then we need insights into problem-solving in general, not just the myriad of special cases which we usually think about.
While I do not think this actually applies to P vs NP or the Collatz Conjecture, I think it does apply to my meta-question, which is something like “Why has no one actually proven that those 2 problems are Super Hard? Or, short of a proof, even just given strong heuristic arguments for why they should be Super Hard?”
It sounds reasonable that answering such a question would require insights into problem-solving in general, even if those problems themselves do have some special structure.