Does your 2nd sentence hold recursively? And could you explain why that is the case?
I can easily imagine mathematical problems that are non-obviously unanswerable for me, or for human beings much smarter than I am, just as the Continuum Hypothesis was for human beings before Gödel and Cohen. But I can also easily imagine an intelligence that would, for example, nearly instantly notice Gödel’s incompleteness theorems as obvious consequences of the Peano axioms.
That is not literally what the original quote says, but an intelligence that could, for example, ‘learn’ our next century of discoveries in mathematics and theoretical physics in an afternoon seems to me to justify the weaker position that there are possible intelligences that would regard every problem we have yet solved or shown to be unsolvable as obvious and not hard.
Assume that for every statement in the Peano axiom system there is either a proof of it, a proof of its negation, or a proof of its independence.

For every program P you could create the statement S(P) = “this program eventually halts”. Now you could solve the halting problem: for a given P, iterate over every proof and check whether it is a proof of S(P), of not S(P), or of the independence of S(P). Once the machine finds a proof of not S(P) or of the independence of S(P), it stops with false; if it finds a proof of S(P), it stops with true. The machine will necessarily stop, by your original assumption.

This works even if you also look for arbitrarily long chains of the form “it is not provable that it is not provable that … it is not provable that S”, since every finite proof has to be checked against only a finite number of such chains (a proof must be longer than the statement it proves).
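For concreteness, here is a minimal Haskell sketch of the machine this argument hypothesizes. Everything in it is illustrative: Proof, enumerateProofs, and proves are assumed primitives, and no such total independence-aware checker can actually exist, which is exactly what the rest of the thread probes.

    newtype Program = Program String            -- source code of P

    data Statement
      = HaltsStmt Program                       -- S(P): "P eventually halts"
      | Not Statement
      | IndependentOfPA Statement

    data Proof                                  -- opaque; never inspected here

    enumerateProofs :: [Proof]                  -- every PA proof, in some order
    enumerateProofs = undefined                 -- assumed primitive

    proves :: Proof -> Statement -> Bool        -- assumed proof checker
    proves = undefined

    -- The claimed halting decider: scan proofs until one settles S(P).
    halts :: Program -> Bool
    halts p = search enumerateProofs
      where
        s = HaltsStmt p
        search (pr : rest)
          | pr `proves` s                 = True
          | pr `proves` Not s             = False
          | pr `proves` IndependentOfPA s = False
          | otherwise                     = search rest
        search [] = error "unreachable: by assumption some proof settles S(P)"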
That is not literally what the original quote says, but an intelligence that could, for example, ‘learn’ our next century of discoveries in mathematics and theoretical physics in an afternoon seems to me to justify the weaker position that there are possible intelligences that would regard every problem we have yet solved or shown to be unsolvable as obvious and not hard.
That is plausible. But as you mentioned, the original quote, catchy though it is, said something different, and unfortunately (unlike most philosophical statements) it is formally wrong.
Thanks for the concrete example. I do still think, though, that there is something of value in the quote that may be worth salvaging, perhaps by restricting it to solvable problems of the sort we care about most (such as scientific and mathematical advances), and by emphasizing not that everything is obvious to some conceivable intelligence, but that most of what is currently not obvious to us, and that we perceive as difficult, is that way not because it is intrinsically so but because we are so limited.
The heart of the quote for me is that instead of hardness being a function of one argument (in Haskell notation):
difficulty :: Problem → PositiveReal
it is a function of two arguments:
difficulty :: Mind → Problem → PositiveReal
And that most of the interesting real-world Problem instances that map to very large numbers for us (i.e. the difficult ones that we will eventually solve if we survive long enough) are problems that sufficiently intelligent minds would deem obvious.
And that is a worthwhile insight, even if the function is not defined for all problems; even if there are solvable problems for which no physically possible mind makes “difficulty m p” (or “difficulty(m, p)” in Python syntax) a small enough number to fall below the threshold of obviousness; and even if for any possible mind we can find solvable problems of arbitrarily high difficulty.
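To make that two-argument picture concrete, here is a toy Haskell sketch; Mind, Problem, the ratio, the threshold, and all numbers are invented purely for illustration, not a claim about how difficulty actually decomposes:

    -- Toy model only: difficulty is relative to the mind, not intrinsic
    -- to the problem. Every type and number here is made up.
    newtype Mind    = Mind    { capability :: Double }  -- bigger = smarter
    newtype Problem = Problem { size       :: Double }

    type PositiveReal = Double

    difficulty :: Mind -> Problem -> PositiveReal
    difficulty m p = size p / capability m

    -- A problem is "obvious" to a mind below some arbitrary threshold.
    obviousTo :: Mind -> Problem -> Bool
    obviousTo m p = difficulty m p < 1

    main :: IO ()
    main = do
      let openProblem = Problem 1e6
      print (obviousTo (Mind 1)   openProblem)  -- False: hard for us
      print (obviousTo (Mind 1e9) openProblem)  -- True: trivial for it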
There’s a flaw in your proof—the step from “The machine will necessarily stop” to “This solves the halting problem” is unjustified.
Despite the flaw, I agree with your general conclusion—there are and will be challenging as well as unsolvable problems.
It’s justified. If a machine halts, then there’s a proof of that in PA (simply the list of steps it performs before halting). Therefore, “independent of PA” implies “doesn’t halt”. Therefore, a provability decider is the same as a halting decider.
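Spelled out as a sketch, in my notation rather than the commenter’s (writing \mathrm{Halt}(P) for the arithmetized claim that P halts):

    \mathrm{Halt}(P) \;\Rightarrow\; \mathrm{PA} \vdash \mathrm{Halt}(P)            % the finite execution trace is itself a proof
    \mathrm{PA} \nvdash \mathrm{Halt}(P) \;\Rightarrow\; \lnot\,\mathrm{Halt}(P)    % contraposition

so in particular “independent of PA” (hence unprovable in PA) implies “doesn’t halt”.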
With the added steps, yes, there is a proof.
Christian’s argument shows there are always problems that can’t be solved or shown to be unsolvable. But even if you ignore them and look at finitely solvable problems (or provable theorems in PA), there’s no upper bound on the time or processing power needed to solve a randomly chosen problem. Not even for the “smartest” intelligence allowed by the laws of physics (i.e. the one ideally optimized to solve the chosen problem).
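A standard way to ground the “no upper bound” claim (my gloss, not the commenter’s): the busy beaver function BB(n), the longest finite running time of any halting n-state Turing machine, dominates every computable function:

    \forall f \ \text{computable} \ \ \exists n :\; BB(n) > f(n)

Each question “does this n-state machine halt?” is finitely solvable, yet no computable function bounds the work required across all of them.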
Thanks for the clear explanation. My response above to Christian is relevant here also.