All it means is that there can be programs so badly written that it is impossible to say what they do.
It’s true that the result is “there exist programs such that”, and it says nothing about the relative sizes of those that can be proved to terminate versus those that can’t, within the subset we care about and are likely to write.
However, programs are written for some purpose. Sometimes that purpose is to test mathematical conjectures. In that case, the straightforward and clear way to write the program actually is the one used, and apparently it often does fall into this subclass.
(And then there are all the programs that aren’t designed to terminate in normal operation, for which we should be thinking about codata and coinduction.)
Upvoted because I think this best captures what the halting problem actually means. The programs for which we really want to know if they halt aren’t recursive functions that might be missing a base case or something. Rather, they’re like the program that enumerates the non-trivial zeroes of the Riemann zeta function until it finds one with real part not equal to 1⁄2.
If we could test whether such programs halted, we would be able to solve any mathematical existence problem ever that we could appropriately describe to a computer.
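For concreteness, here is a rough Python sketch of that zero-hunting program. It is conceptual only: mpmath’s `zetazero` locates zeros on the critical line, so the check below can never actually fire, and a genuine refutation would need certified arithmetic and a zero-counting argument. The function name and tolerance are my own illustrative choices.

```python
# Conceptual only: a program that halts exactly when it finds a counterexample
# to the Riemann Hypothesis.  mpmath's zetazero() locates zeros on the critical
# line, so in practice this check never fires; a real refutation would need
# certified interval arithmetic and a zero-counting argument.
from itertools import count

from mpmath import mp, zetazero

mp.dps = 30  # working precision, in decimal places

def hunt_for_counterexample(tolerance: float = 1e-20):
    for n in count(1):
        rho = zetazero(n)                    # n-th nontrivial zero with Im > 0
        if abs(rho.real - 0.5) > tolerance:
            return rho                       # halts iff the conjecture fails
        # otherwise keep enumerating forever -- which is exactly the point

# hunt_for_counterexample()   # would run until the heat death of the universe
```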
People researching program verification for purposes of improving software engineering really are interested in the kinds of mistakes programmers make, like “recursive functions that might be missing a base case or something”. Or like unbounded iteration for which we can prove termination, but really ought to double-check that proof.
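As a toy illustration of that second kind (my own example, not anything from a particular verification tool): unbounded iteration where termination follows from a simple decreasing measure, the sort of argument a checker or a reviewer can actually discharge.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: unbounded iteration whose termination we can prove.
    The measure b is a nonnegative integer that strictly decreases on every
    pass (a % b < b when b > 0), so the loop cannot run forever -- exactly the
    kind of fact a termination checker, or a careful reviewer, double-checks."""
    assert a >= 0 and b >= 0
    while b != 0:
        a, b = b, a % b   # variant: b strictly decreases and stays >= 0
    return a
```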
Unbounded iteration for which we cannot (currently) prove termination may be of interest to mathematicians, but even a mathematician isn’t stupid enough to attempt to resolve the Zeta function conjecture by mapping the problem to a question about halting computer programs. That is a step in the wrong direction.
And if we build an AGI that attempts to resolve the conjecture by actually enumerating the roots in order, then I think that we have an AI that is either no longer rational or has too much time on its hands.
I suppose I wasn’t precise enough. Actual researchers in that area are, rightly, interested in more-or-less reasonable halting questions. But these don’t contradict our intuition about what halting problems are. In fact, I would guess that a large class of reasonable programming mistakes can be checked algorithmically.
But as far as the consequences of there being, hypothetically, a solution to the general halting problem go… the really significant result would be the existence of a general “problem-solving algorithm”. Sure, Turing’s undecidability proof shows that the halting problem is undecidable… but this is my best attempt at capturing the intuition of why it’s not even a little bit close to being decidable.
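To spell that intuition out with a sketch: a halting tester would immediately decide any computably checkable existence question over an enumerable search space. The `halts` oracle below is purely hypothetical, which is the whole point, and the function names are mine.

```python
from typing import Callable, Iterable

def decide_existence(candidates: Iterable,
                     has_property: Callable[[object], bool],
                     halts: Callable[[Callable[[], None]], bool]) -> bool:
    """Reduce 'does some x in candidates satisfy has_property?' to one call of
    the (hypothetical!) halting oracle `halts`."""
    def search() -> None:
        for x in candidates:
            if has_property(x):
                return          # halts as soon as a witness turns up
        while True:             # finite enumeration, no witness: spin forever,
            pass                # so 'search halts' <=> 'a witness exists'
    return halts(search)
```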
Of course, even in a world where we had a halting solver, we probably wouldn’t want to use it to prove the Riemann Hypothesis. But your comment actually reminds me of a clever little problem I once saw. It went as follows:
If someone ever proves that P=NP, it’s possible the proof will be non-constructive: it won’t actually show how to solve an NP-complete problem in polynomial time. However, this doesn’t actually matter. Write an algorithm for solving SAT (or any other NP-complete problem) with the following property: if P=NP, then your algorithm should run in polynomial time.
(the connection to your comment is that even if P=NP, we wouldn’t actually use the solution to the above problem in practical applications. We’d try our best to find something better)
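For anyone curious, the intended answer is Levin-style universal search: dovetail over every program, and exploit the fact that SAT answers are cheap to verify. The sketch below is heavily hedged and entirely my own construction (the helper names, the `result` convention, and the step-counting via `sys.settrace` are illustrative choices); it covers only the “return a satisfying assignment when one exists” half, never halts on unsatisfiable inputs, and you would not actually exec untrusted enumerated strings like this.

```python
import itertools
import string
import sys
from typing import List, Sequence

Cnf = List[List[int]]   # DIMACS-style: literal k means var |k| is True iff k > 0

def verify(cnf: Cnf, assignment: Sequence[bool]) -> bool:
    """The easy half of NP: check a candidate assignment in polynomial time."""
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in cnf)

class StepLimit(Exception):
    pass

def run_with_budget(src: str, cnf: Cnf, budget: int):
    """Run candidate source for at most `budget` trace events; by convention a
    candidate reports its answer by assigning a list of booleans to `result`."""
    steps = 0
    def tracer(frame, event, arg):
        nonlocal steps
        steps += 1
        if steps > budget:
            raise StepLimit
        return tracer
    env = {"cnf": cnf}
    sys.settrace(tracer)
    try:
        exec(src, env)              # most enumerated strings fail to compile
    except BaseException:           # candidates may raise anything at all
        return None
    finally:
        sys.settrace(None)
    return env.get("result")

def all_programs():
    """Every string over the printable characters, shortest first."""
    for length in itertools.count(1):
        for chars in itertools.product(string.printable, repeat=length):
            yield "".join(chars)

def universal_sat(cnf: Cnf, nvars: int) -> List[bool]:
    """Dovetail (program, step budget).  If P = NP, some fixed program solves
    SAT in polynomial time; once the enumeration reaches it with enough budget,
    its verified output is returned -- polynomial time overall, up to an
    astronomically large constant factor.  Only halts on satisfiable inputs."""
    seen: List[str] = []
    gen = all_programs()
    for round_no in itertools.count(1):
        seen.append(next(gen))
        for src in seen:
            out = run_with_budget(src, cnf, budget=2 ** round_no)
            if isinstance(out, (list, tuple)) and len(out) == nvars:
                candidate = [bool(x) for x in out]
                if verify(cnf, candidate):
                    return candidate
```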
What you mean ‘we,’ white man?
Which raises a question I have never considered …
Is there something similar to the halting-problem theorem for this kind of computation?
That is, can we classify programs as either productive or non-productive, and then prove the non-existence of a program that can classify?
That’s a great question. I haven’t found anything on a brief search, but it seems like we can fairly readily embed a normal program inside a coinductive-style one and have it be productive after the normal program terminates.
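Here is a minimal Python sketch of that embedding, using a generator as a stand-in for a coinductively defined stream. The stream is productive exactly when the embedded program halts, so a general productivity checker for streams like this would also decide the halting problem; the function names are my own.

```python
from typing import Callable, Iterator

def embed(program: Callable[[], None]) -> Iterator[int]:
    """A stream ('codata', modelled here as a generator) that first runs an
    ordinary program to completion and only then starts producing elements.
    It yields its first element iff `program` halts, so a general productivity
    checker for streams like this one would also decide the halting problem."""
    program()                 # may or may not terminate
    while True:
        yield 0               # from here on, obviously productive

finite = embed(lambda: None)  # the embedded program halts immediately...
print(next(finite))           # ...so the stream is productive: prints 0

def spin() -> None:
    while True:
        pass
# never = embed(spin)         # next(never) would hang: not productive
```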