I’m very confused about something related to the Halting Problem. I discussed this on the IRC with some people, but I couldn’t get across what I meant very well. So I wrote up something a bit longer and a bit more formal.
The gist of it is: the halting problem lets us prove that, for a specific counterexample, there cannot exist any proof that it halts or not. A proof that it does or does not halt causes a paradox.
But if it’s true that there doesn’t exist a proof that it halts, then it will run forever searching for one. Therefore I’ve proved that the program will not halt. Therefore a proof that it doesn’t halt does exist (this one), and it will eventually find it. Creating a paradox.
Just calling the problem undecidable doesn’t actually solve anything. If you can prove it’s undecidable, it creates the same paradox. If no Turing machine can know whether or not a program halts, and we are also Turing machines, then we can’t know either.
Now here is the weird and confusing part. If the above is a valid proof, then H will eventually find it. It searches all proofs, remember?
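For concreteness, here is a minimal sketch of the kind of program under discussion, in Python. This is an illustration, not the linked article’s actual code: the two check_if_proof_... functions are the hypothetical proof-verifiers named later in the thread, and are deliberately left unimplemented.

    from itertools import count

    def enumerate_proofs():
        # Enumerate every finite string over a small stand-in alphabet.
        # Proofs in a fixed formal system are finite strings, so every
        # candidate proof is eventually reached.
        alphabet = "01"
        for length in count(1):
            for n in range(len(alphabet) ** length):
                digits = []
                for _ in range(length):
                    digits.append(alphabet[n % len(alphabet)])
                    n //= len(alphabet)
                yield "".join(digits)

    def H(x, i):
        # Search for a proof that program x halts on input i, or a proof
        # that it doesn't, and report the first verdict found.
        for proof in enumerate_proofs():
            if check_if_proof_proves_x_halts(proof, x, i):
                return True
            if check_if_proof_proves_x_doesnt_halt(proof, x, i):
                return False
            # Neither checker accepted this candidate: keep searching.
            # If no proof exists either way, this loop never terminates.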
Fortunately, H will never find your argument because it is not a correct proof. You rely on hidden assumptions of the following form (given informally and symbolically):
If φ is provable, then φ holds.
Provable(#φ) → φ
where #φ denotes the Gödel number of the proposition φ.
Statements of this form are generally not provable. This phenomenon is known as Löb’s theorem—featured in Main back in 2008.
You use these invalid assumptions to eliminate the first two options from “Either H returns true, or false, or loops forever.” For example, if H returns true, then you can infer that “FF halts on input FF” is provable, but that does not contradict “FF does not halt on input FF”.
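For reference, Löb’s theorem in the notation above (a transcription added here, not part of the comment):

    % For any theory T extending basic arithmetic and any sentence phi:
    \text{if } T \vdash \mathrm{Provable}(\#\varphi) \rightarrow \varphi,
    \text{ then } T \vdash \varphi.

So a theory has the implication “if φ is provable then φ” available only for those φ it already proves outright.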
I’m very confused. Of course if φ is provable then it’s true. That’s the whole point of using proofs.
But that statement isn’t provable.
Then just assume it as an axiom.
Then the paradox you were describing fires and the system becomes inconsistent.
Yes, but it may be true without being provable.
It is not that these statements are “not generally valid”, but that they are not included within the axiom system used by H. If we attempt to include them, there will be a new statement of the same kind which is not included.
Obviously such statements will be true if H’s axiom system is true, and in that sense they are always valid.
It is not that these statements are “not generally valid”
The intended meaning of valid in my post is “valid step in a proof” in the given formal system. I reworded the offending section.
Obviously such statements will be true if H’s axiom system is true, and in that sense they are always valid.
Yes, and one also has to be careful with the use of the word “true”. There are models in which the axioms are true, but which contain counterexamples to Provable(#φ) → φ.
Therefore a proof that it doesn’t halt does exist (this one), and it will eventually find it.
Gödel’s incompleteness bites here. What theory is your halt-testing machine H searching for a proof within? H can only find termination proofs that are derivable from the axioms of that theory. What theory is your proof that H(FL,FL) does not terminate expressed in? I believe it will necessarily not be the same one.
To expand on this - check_if_proof_proves_x_halts will be working using a certain set of axioms and derivation rules. When you prove that H(FL, FL) doesn’t halt, you also use the assumption that check_if_proof_proves_x_halts will definitely return the true answer, which is an assumption that check_if_proof_proves_x_halts doesn’t have as an axiom and can’t prove. (And the same for ..._x_doesnt_halt.) So H can’t use your proof. When it calls check_if_proof_proves_x_doesnt_halt on your proof, that function returns “false” because your proof uses an axiom that that function doesn’t believe in.
I’m not super confident about this stuff, but I think this is broadly what’s going on.
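A toy illustration of this point (with invented axiom names): model a “proof” as the set of axioms it cites, and a checker as accepting only proofs that cite axioms from its own set.

    # Toy model: a checker rejects any proof citing an axiom it
    # "doesn't believe in". All names here are made up.
    T_AXIOMS = {"axiom_1", "axiom_2", "induction_schema"}
    EXTENDED_AXIOMS = T_AXIOMS | {"T_is_never_wrong"}  # i.e. T + CON(T)

    def checker_accepts(proof_axioms, checker_axioms):
        # Accept the proof only if it uses no axiom the checker lacks.
        return set(proof_axioms) <= set(checker_axioms)

    # The informal non-halting proof leans on the soundness of T:
    your_proof = {"axiom_1", "T_is_never_wrong"}

    print(checker_accepts(your_proof, EXTENDED_AXIOMS))  # True
    print(checker_accepts(your_proof, T_AXIOMS))         # False: H discards it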
Just calling the problem undecidable doesn’t actually solve anything. If you can prove it’s undecidable, it creates the same paradox. If no Turing machine can know whether or not a program halts, and we are also Turing machines, then we can’t know either.
I guess the answer to this point is that when constructing the proof that H(FL, FL) loops forever, we assume that H can’t be wrong. So we are working in an extended set of axioms: the program enumerates proofs given some set of axioms T, and the English-language proof in the tumblr post uses the axiom system T + “T is never wrong” (we can write this T+CON(T) for short).
Now, this is not necessarily a problem. If you have reason to think that T is consistent, then most likely T+CON(T) is consistent also (except in weird cases). So if we had some reason to adopt T in the first place, then working in T+CON(T) is also a reasonable choice. (Note that this is different from working in a system which can prove its own consistency, which would be bad. The difference is that in T+CON(T), there is no way to prove that proofs using the additional “T is never wrong” axiom are correct).
More generally, the lesson of Gödel’s incompleteness theorem is that it does not make sense to say that something is “provable” without specifying which proof system you are using, because there is no natural choice of an “ideal” system; they are all flawed. The tumblr post seems paradoxical because it implicitly shifts between two different axiom sets. In particular, it says
If there is no way for H to prove whether it halts or not, then we can’t prove it either.
but a correct statement is: we can’t prove it either using the same set of axioms as H used. We have to use some additional ones.
EDIT: Okay, now I read your article, and I see that what I wrote here is irrelevant. Leaving it here anyway, maybe someone else will like it.
I admit I didn’t read your linked article thoroughly, but I will try to explain the basic theory and terminology.
In computer science, when we are talking about a computation that for any input always ends in finite time and provides an answer, we call such a computation an algorithm, and the mathematical function computed by this algorithm is called a “total recursive function” (or “computable function”).
When we are talking about a computation that could potentially run forever, where running forever is treated as an implied special value, we call such a computation a program, and the mathematical function computed by such a program is called a “partial recursive function”—a mathematical function with an undefined value for inputs where the program runs forever. (Algorithms are a subset of programs, and total recursive functions are a subset of partial recursive functions.)
Specifically, if we care about computations returning “yes” or “no”, an algorithm corresponds to a total recursive function from X to { “yes”, “no” }, where X is the input domain: strings, numbers, whatever your computation accepts as a valid input. If we care about computations that potentially run forever, such a program corresponds to a partial recursive function from X to { “yes” }, where not returning a value is an implied “no”. (You could have other, equivalent definitions, but that would only make things less elegant mathematically.)
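In Python terms (a toy example of this distinction): the first function below is total, answering on every input, while the second computes a partial function and may search forever.

    def parity(n):
        # Total recursive function: answers for every input -- an algorithm.
        return "yes" if n % 2 == 0 else "no"

    def find_violation(n, property_holds):
        # Partial recursive function: returns "yes" if some m >= n violates
        # the property, but runs forever if the property always holds.
        # Never returning is the implied "no" -- a program, not an algorithm.
        m = n
        while property_holds(m):
            m += 1
        return "yes"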
When debating things such as the Halting Problem, we have to pay especially close attention to the distinction between algorithms and programs, or respectively between total and partial recursive functions. When speaking about programs / partial recursive functions, we have to pay attention to which answer is the implied one in case of an infinite computation (is “yes” explicit and “no” implied, or is “no” explicit and “yes” implied?). A simple mistake can change everything dramatically. For example:
Q1: “Can we make an algorithm which, given an arbitrary program and an arbitrary input for that program, determines whether the program will stop in finite time?”
A1: “No. See the standard debate about Halting Problem.”
Q2: “Can we make a program which, given an arbitrary program and an arbitrary input for that program, determines whether the program will stop in finite time?”
A2: “Yes, trivially. Just simulate the given program, and print ‘yes’ if the simulation stops.”
Q3: “Can we make a program which, given an arbitrary program and an arbitrary input for that program, determines whether the program will run forever?”
A3: “No. If such a program were possible, then together (running in parallel) with the previous program they would make an algorithm solving the Halting Problem, which we already know is mathematically impossible.”
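A sketch of the parallel composition in A3, assuming two hypothetical one-sided searchers modeled as generators that yield None on each step and “yes” once they have an answer:

    def dovetail(halts_yes, runs_forever_yes, program, inp):
        # Interleave the two searches one step at a time. If both
        # searchers existed, this would always terminate -- a total
        # algorithm for the Halting Problem. Since that is impossible
        # and the first searcher exists (A2), the second cannot exist.
        a = halts_yes(program, inp)         # "yes" iff program halts on inp
        b = runs_forever_yes(program, inp)  # "yes" iff it runs forever
        while True:
            if next(a) == "yes":
                return "halts"
            if next(b) == "yes":
                return "runs forever"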
I believe that if you are very careful about which of these words you use, the confusion will be resolved. In my opinion, most likely at some place you proved that a program exists for something, but then you started treating it as an algorithm.
Also, there are definitely objective facts about which you cannot get the right answer:
“After thinking about it, you will decide that this statement is false, and you will not change your mind.”
If you conclude that this is false, then the statement will be true. No paradox, but you are wrong.
If you conclude that this is true, then the statement will be false. No paradox, but you are wrong.
If you make no conclusion, or continuously change your mind, then the statement will be false. No paradox, but the statement is undecidable to you.
But if it’s true that there doesn’t exist a proof that it halts, then it will run forever searching for one.
No; provable and true are not the same thing. It may be the case that the program halts, but it is nevertheless impossible to prove that it halts except by “run it and see”, which doesn’t count.
Okay, an attempt to clear up the Halting Problem:
The proofs in question say it is impossible to algorithmically arrive at a Yes-or-No answer to the generalized question, “Does algorithm X have non-trivial property Y for input Z?”, for any specific property Y. Poorly written proof at the bottom. It critically doesn’t say that you can’t have a Yes-or-No-or-Undecidable answer to that question. Introduce an uncertainty factor and the issue falls apart. Thus, you -can- write a Halting Oracle, so long as you’re satisfied that sometimes it won’t be able to say.
For purposes of AI Friendliness—which is certainly a non-trivial property—Rice’s Theorem says we can’t algorithmically prove that an AI—which is an algorithm—is, or is not, friendly, once we manage to define what friendly even is. We can theoretically, however, prove that AI is friendly, not friendly, or undecidably friendly. For our purposes, “not friendly” and “undecidably friendly” are probably equivalent.
For purposes of everything else, nothing at all says that it’s impossible to prove whether or not a specific program will halt given a specific input. “But what about the program in Turing’s diagonalization proof? Does it halt or not?” The input in that case (the Halting Oracle) can’t exist in the first place, as the proof demonstrates.
“But what about this neat program I wrote that halts 25% of the time?” What did you use to generate a random number? Not being aware of your hidden inputs isn’t the same as not having an input. For a given set of inputs, your algorithm halts 100%, or 0%, of the time.
A Halting Oracle that only says “Yes”, “No”, or “Provably Undecidable” hasn’t (to my knowledge and research) been proven to be impossible—and a “Yes”, “No”, or “Maybe” Halting Oracle is quite trivial, as you can throw all cases you can’t figure out an algorithm for into the “Maybe” pile. The proofs do not demonstrate that there are any algorithms which are nondeterministically halting for a given input, nor do they demonstrate that we can’t prove that any given algorithm will or will not halt, nor do they even demonstrate that an algorithm can’t prove that other algorithms will or will not halt—they demonstrate one thing and one thing only: that there is no single algorithm giving a “Yes” or “No” answer that will work on all algorithms for all inputs.
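The trivial “Maybe” oracle can be sketched like this (step here is a hypothetical single-step interpreter returning the next state, or None once the simulated program has halted):

    def maybe_oracle(step, state, budget=10**6):
        # Simulate under a step budget; everything that outlives the
        # budget goes in the "Maybe" pile.
        for _ in range(budget):
            state = step(state)
            if state is None:
                return "Yes"  # halted within the budget
        return "Maybe"        # no verdict -- not a wrong answer, just none

A fancier version could also return “No” when it detects, say, a repeated state; whatever it can’t settle stays “Maybe”, which is exactly what keeps it consistent with the impossibility proofs.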
An equivalent-to-Rice’s-Theorem-for-our-very-limited-purposes proof, written like the halting problem proof. A non-trivial property in this case means one that -can- vary by algorithm and input.
Suppose Algorithm Zed determines whether or not Algorithm X has non-trivial property Y for the input Z, returning a boolean value. Take Algorithm ScrewZed, which takes AlgorithmZed as input, and which has the non-trivial property Y iff AlgorithmZed, evaluated on Algorithm ScrewZed with the input of AlgorithmZed, says it doesn’t have non-trivial property Y. Paradox ensues, provided I wrote all this correctly.
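In code, the construction looks roughly like this (a sketch: zed is the claimed decider, and the two behave_* helpers are hypothetical stand-ins for exhibiting or avoiding property Y):

    def make_screw_zed(zed):
        # zed(algorithm, inp) -> bool is claimed to decide property Y.
        def screw_zed(inp):
            if zed(screw_zed, zed):
                behave_without_property_Y()  # Zed said "has Y": avoid Y
            else:
                behave_with_property_Y()     # Zed said "no Y": exhibit Y
        return screw_zed

    # Now zed(screw_zed, zed) is wrong whichever boolean it returns.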
A Halting Oracle that only says “Yes”, “No”, or “Provably Undecidable” hasn’t (to my knowledge and research) been proven to be impossible
I think I just proved this. If you can prove something is undecidable, it creates a paradox.
nor do they demonstrate that we can’t prove that any given algorithm will or will not halt
If you could prove, of any algorithm, that it will halt or not halt, then you could easily make a halt-detection machine that works in all cases. There are some programs which do not halt, but for which it’s impossible to prove that they will not halt.
nor do they even demonstrate that an algorithm can’t prove that other algorithms will or will not halt
But not for all cases, see above.
“If you could prove any algorithm will halt or not halt, then you can easily make a halt-detection machine that works in all cases.”
That is only true if your proofs all work from one and the same set of axioms. But in reality you can choose different sets of axioms as needed. So it may be possible to prove of each and every algorithm that it will halt or not, but you cannot make a halt-detection machine that works in all cases, because your proofs use different sets of axioms.
What makes you say this?
If I can prove that the problem is undecidable, so can H. H searches through all possible proofs, which must contain that proof too.
If a problem is undecidable, that means no proof exists either way. Otherwise it would be decidable, in principle.
If no proof exists either way, and H searches through all possible proofs, then it will not halt. It will keep searching forever.
Therefore, if you can prove that it is undecidable, then you can prove that H will not halt. And H can prove this too.
So H has proved that it will not halt, and returns false. This causes a paradox.
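The claimed chain, written out symbolically in TezlaKoil’s notation (a transcription; Halts abbreviates “H halts on this input”):

    \neg\mathrm{Provable}(\#\mathrm{Halts}) \wedge \neg\mathrm{Provable}(\#\neg\mathrm{Halts})
        \;\rightarrow\; \text{the search never succeeds}
        \;\rightarrow\; \neg\mathrm{Halts}
    % and, if that reasoning is itself a proof:
    \mathrm{Provable}(\#\neg\mathrm{Halts}) \;\rightarrow\; H\text{ returns false} \;\rightarrow\; \mathrm{Halts}

The replies below locate the flaw in the step that trades provability inside H’s own system for truth.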
Technically you can’t. You can prove it’s undecidable as long as your axiom system is consistent. However, it’s impossible for a consistent axiom system to prove its own consistency.
Perhaps. If so, just make it an axiom. It’s silly to use a system that doesn’t believe it’s consistent.
The problem is that no consistent system can prove its own consistency. (Of course, an inconsistent system can prove all statements, including both its own consistency and inconsistency.)
Consider a system S. You can add the axiom “S is consistent”, but now you have a new system that still doesn’t know that it’s consistent.
On the other hand, one can add the axiom “S is consistent even with this axiom added”. Your new system is now inconsistent for more or less the reason you used in formulating the above paradox.
I’m not sure that my paradox even requires the proof system to prove its own consistency.
But regardless, even if removing that does resolve the paradox, it’s not a very satisfying resolution. Of course a crippled logic can’t prove interesting things about Turing machines.
I’m not sure that my paradox even requires the proof system to prove its own consistency.
Your argument requires the proof system to prove its own consistency. As we discussed before, your argument relies on the assumption that the implication
If “φ is provable” then “φ”
Provable(#φ) → φ
is available for all φ. If this were the case, your theory would prove itself consistent. Why? Because you could take the contrapositive
If “φ is false” then “φ is not provable”
¬φ → ¬Provable(#φ)
and substitute “0=1” for φ. This gives you
If “0≠1” then “0=1 is not provable”
¬(0=1) → ¬Provable(#(0=1))
The premise “0≠1” holds. Therefore, the consequence “0=1 is not provable” also holds. At this point your theory is asserting its own consistency: in an inconsistent theory everything is provable, so “0=1 is not provable” entails consistency.
You might enjoy reading about the Turing Machine proof of Gödel’s first incompleteness theorem, which is closely related to your paradox.
0 is not equal to 1, so it’s not inconsistent. I don’t understand what you are trying to say with that.
It would be really silly for a system not to believe it was consistent. And, if it were true, it would also apply to the mathematicians making such statements. The mathematicians are assuming it’s true, and it is obviously true, so I don’t see why a proof system should not have it.
In any case I don’t see how my system requires proving “x is provable implies x”. It searches through proofs in a totally unspecified proof system. It then proves the standard halting problem on a copy of itself, and shows that it will never halt. It then returns false, causing a paradox.
Are you saying that it’s impossible to prove the halting problem?
everything is provable in an inconsistent theory
So if something is not provable in a theory, that proves it is consistent?
I did read your link but I don’t understand most of it.
TezlaKoil doesn’t include his whole argument here. Basically he is using Gödel’s second incompleteness theorem. The theorem proves that a theory sufficiently complex to express arithmetic cannot have a proof of the statement corresponding to “this theory is consistent” without being an inconsistent theory.
This doesn’t show that arithmetic has a proof of “this theory is inconsistent” either. If it does, then arithmetic is in fact inconsistent. Since we think arithmetic is consistent, we think that the arithmetical formula corresponding to “arithmetic is consistent” is true but undecidable from within arithmetic.
It also doesn’t imply that the theory composed of arithmetic plus “arithmetic is consistent” is inconsistent, because this theory is more complicated than arithmetic and does not assert its own consistency.
Of course we think the more complicated theory is true and consistent as well, but adding that would just lead to yet another theory, and so on.
If you try to use mathematical induction to form a theory that includes all such statements, that theory will have an infinite number of postulates and will not be able to be analyzed by a Turing machine.
This part is not quite accurate. Actually, the commonly used theories of arithmetic (and sets) have infinitely many axioms. The actual problem with your approach above is that the theory still won’t be able to prove its own consistency, since any proof can only use finitely many of the axioms. One can of course add an additional axiom and keep going using transfinite induction, but eventually one will run into a theory that a Turing machine can’t analyze.
I would agree that it’s not very satisfying, in the sense that there is no satisfying resolution to the paradox of the liar.
But logic is actually crippled in reality in that particular way. You cannot assume either that “this is false” is true, or that it is false, without arriving at a contradiction.
I’m not sure that my paradox even requires the proof system to prove its own consistency.
If the system is inconsistent, your program will halt on all inputs, with output dependent on whether it happens to find a proof of “this program halts” or “this program doesn’t halt” first.
Of course a crippled logic can’t prove interesting things about Turing machines.
Well, mathematicians have been proving interesting things about Turing machines for the past century despite these limitations.
From the link:
That means that we can’t actually prove that a proof doesn’t exist, or it creates a paradox. But we did prove it! And the reasoning is sound! Either H returns true, or false, or loops forever. The first two options can’t be true, on pain of paradox. Leaving only the last possibility. But if we can prove that, so can H. And that itself creates a paradox.
H proves that it can’t decide the question one way or the other. The assumption that H can only return TRUE or FALSE is flawed: if a proof exists that something is undecidable, then H would need to be able to return “undecidable”.
This example seems to verify the halting problem: you came up with an algorithm that tries to decide whether a program halts, and then came up with an input for which the algorithm can’t decide one way or another.
H is literally defined as either returning true or false. Or it can run forever, if it can’t find a proof. It’s possible to create another program which does return “UNDECIDABLE” sometimes. But that is not H.
The point is that the behavior of H is paradoxical. We can prove that it can’t return true or false without contradiction. But if that’s provable, that also creates a contradiction, since H can prove it too.
Not only can H not decide, but we can’t decide whether or not H will decide. Because we aren’t outside the system, and the same logic applies to us.
I did overlook the definition of H. Apologies.
More precisely, H will encounter a proof that the question is undecidable. It then runs into the following two if statements:
if check_if_proof_proves_x_halts(proof, x, i)
if check_if_proof_proves_x_doesnt_halt(proof, x, i)
Both return “false”, so H moves on to the next iteration of the while loop. H will generate undecidability proofs, but as implemented it will merely discard them and continue searching. Since such proofs do not cause H to halt, and since there are no proofs that the program halts or doesn’t, H will run forever.
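As a fragment of H’s loop body (a sketch, not a standalone script, using the thread’s hypothetical checker functions; note there is no third branch):

    while True:
        proof = next_candidate_proof()  # hypothetical enumerator
        if check_if_proof_proves_x_halts(proof, x, i):
            return True
        if check_if_proof_proves_x_doesnt_halt(proof, x, i):
            return False
        # A perfectly valid undecidability proof arrives here: both
        # checks return False, so H discards it and keeps looping.
        # H has no "UNDECIDABLE" return value, so such proofs can
        # never make it halt.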
If it is undecidable, then that means no proof exists (that H will or will not halt).
If no proof exists, then H will loop forever searching for one.
Therefore undecidability implies H will run forever. You’ve just proved this.
Therefore a proof exists that H will run forever (that one), and H will eventually find it.
Paradox...
As people have been saying, if H can make this argument it is inconsistent and does not work properly (i.e. it does not return True or False in the correct situations.)
It does not give a counterexample. It specifies a way that you could find a counterexample if there were a halting oracle. But if there were a halting oracle, there wouldn’t be any counterexample. So what is found is a contradiction.
The standard halting problem proof doesn’t specify what the halting oracle is. It just shows how to construct a counterexample for any halting oracle. I actually specified a halting oracle: a program which searches through all possible proofs until it finds a proof that it halts or not.
Then running it on the counterexample causes it to run forever. Therefore I’ve proved that it will run forever. The program will eventually find that proof, return false, and halt.
{All possible proofs} has infinitely many elements, so your algorithm will (or at least might) run forever on some programs that do halt, so it is not a halting oracle.
If a program halts, it’s easy to prove that it halts. Just run it until it halts. The problem is proving that some programs won’t halt.
There is no program such that no Turing machine can determine whether it halts or not. But no Turing machine can take every program and determine whether or not each of them halts.
It isn’t actually clear to me that you are a Turing machine in the relevant sense, since there is no context where you would run forever without halting, and there are contexts where you will output inconsistent results.
But even if you are, it simply means that there is something undecidable to you—the examples you find will be about other Turing machines, not yourself. There is nothing impossible about that, because you don’t and can’t understand your own source code sufficiently well.
The program I specified is impossible to prove will halt. It doesn’t matter what Turing machine, or human, is searching for the proof. It can never be found. It can’t exist.
The paradox is that I can prove that. Which means I can prove the program searching for proofs will never halt. Which I just proved is impossible.
I looked at your specified program. The case there is basically the same as the situation I mentioned, where I say “you are going to think this is false.” There is no way for you to have a true opinion about that, but there is a way for other people to have a true opinion about it.
In the same way, you haven’t proved that no one and nothing can prove that the program will not halt. You simply prove that there is no proof in the particular language and axioms used by your program. When you proved that program will not halt, you were using a different language and axioms. In the same way, you can’t get that statement right (“you will think this is false”) because it behaves as a Filthy Liar relative to you. But it doesn’t behave that way relative to other people, so they can get it right.
No, it’s the halting problem all the way down.
But if it’s true that there doesn’t exist a proof that it halts, then it will run forever searching for one.
Not remotely! There’s no proof that it halts, and there’s no proof that it doesn’t halt. It will run until it halts or the universe ends—there is no forever. The key is that there can be programs for which nobody can tell which case they are in without actually trying them, until they halt or the universe ends.
The halting problem doesn’t actually imply this.
“It’s the halting problem all the way down” doesn’t resolve the paradox, but does express the issue nicely.
Do you not agree with the sentence you quoted? That if a proof of haltiness doesn’t exist, it will search forever for one? And not halt? Because that trivially follows from the definition of the program. It searches proofs forever, until it finds one.
Nope, it also can’t be proven that it’ll search forever: it might halt a few billion years (or a few hundred ms) in. There’s no period of time of searching after which you can say “it’ll continue to run forever”, as it might halt while you’re saying it, which is embarrassing.
I am referring to the program H which I formally specified in the link I posted. H is a specific program which tries to determine if another program will halt.
I then show how to create a counterexample for H. And show that if H returns either true or false, it creates a contradiction. Therefore it can’t ever return true or false.
Therefore I’ve proved it will run forever. And this is just the standard proof of the halting problem. The weird part is that proving this also creates a contradiction.
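For readers without the link, the counterexample construction being described is the standard diagonal one, sketched here (H as specified in the post; the name FF follows TezlaKoil’s comment):

    def FF(x):
        # Do the opposite of whatever H predicts about x run on itself.
        if H(x, x):          # H says "x halts on input x"
            while True:      # ...so run forever
                pass
        else:                # H says "x does not halt on input x"
            return           # ...so halt at once

    # Feed FF to itself: if H(FF, FF) returns True, FF(FF) loops forever;
    # if it returns False, FF(FF) halts. Either verdict refutes H, so on
    # this input H can never correctly return at all.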