If P vs NP is independent of ZFC, this means that it’s impossible to give a proof that P=NP within ZFC. Since an example of a polynomial-time algorithm that can solve an NP-complete problem would be such a proof, any such algorithm must be impossible to state in ZFC. Any algorithm we can actually run can be stated in ZFC. As such, this would mean that P!=NP. You’d just have to prove it in ZFC+1, which is ZFC with the additional axiom that ZFC (the original, not ZFC+1) is consistent.
I can write down a polynomial-time algorithm which solves SAT (for all but finitely many instances, at least), provided that in fact P = NP. The difficulty is proving that this particular algorithm works, which could easily be true but independent of ZFC.
Elaborate, please.
I’ll write down an algorithm that solves SAT in time N^k if any algorithm does.
On SAT instances of size T: enumerate all algorithms of size up to log(T); for each one, consider all SAT instances of size up to log(T); run each algorithm for time log(T)^k, outputting 0 if it doesn’t halt, and compare its output to the result of a brute-force search.
Take the shortest algorithm which worked in each of these tests, and use it to answer your original SAT query of size T (aborting and outputting 0 if it takes more than T^k time).
This entire process takes time poly(T). Moreover, suppose there is an N^k-time algorithm which solves SAT correctly, and let A be the shortest such algorithm. Then there is a finite constant T0 such that every algorithm shorter than A fails on some SAT instance of size at most T0. Then the algorithm I described works correctly on all SAT instances of size at least 2^(T0 + |A|).
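A rough Python sketch of this procedure, under stated assumptions: every helper passed in (`size`, the shortest-first enumerators `programs_up_to` and `instances_up_to`, the step-bounded interpreter `run_program`, and the exact `brute_force` solver) is a hypothetical placeholder, not a real API.

```python
import math

def universal_sat(formula, k, size, programs_up_to, instances_up_to,
                  run_program, brute_force):
    """A sketch of the procedure above. Every argument after `k` is an
    assumed helper: an instance-size measure, shortest-first enumerators
    of programs and of SAT instances, a step-bounded interpreter that
    returns 0 on timeout, and an exact brute-force SAT solver."""
    T = size(formula)
    L = max(1, int(math.log2(T)))

    # Find the shortest program (of size <= log T) that agrees with
    # brute force on every instance of size <= log T, when each run is
    # cut off after log(T)^k steps.
    for prog in programs_up_to(L):
        if all(run_program(prog, inst, steps=L ** k) == brute_force(inst)
               for inst in instances_up_to(L)):
            # Answer the real query with it, aborting (and outputting 0)
            # if it exceeds T^k steps.
            return run_program(prog, formula, steps=T ** k)
    return 0  # no program passed all the tests
```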
(Edit: this argument is silly; you can just run all programs of size log(T) and output a solution if any of them find one)
Edit: Note in particular that the algorithm which takes k = BB(10^10) and instead does a brute-force search for T < BB(10^10) is guaranteed to solve SAT in poly time, provided that any algorithm does. Realistically, I think taking k = 10 and doing a brute-force search for T < 10^10^10 is virtually guaranteed to solve SAT in poly time, if any algorithm does (and I can actually write down this latter algorithm).
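The shape of this “realistic” variant is just the following sketch (both helpers are hypothetical; `universal_search` stands for the procedure above):

```python
import math

def hybrid_sat(formula, T, brute_force, universal_search):
    # Sketch of the "realistic" variant: below the conceptual cutoff
    # 10^(10^10) we brute force; above it we run the universal search
    # with k = 10. Comparing logarithms avoids ever materializing the
    # astronomically large cutoff constant.
    if math.log10(T) < 10 ** 10:
        return brute_force(formula)
    return universal_search(formula, k=10)
```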
Perhaps a slightly simpler way would be to ‘run all algorithms simultaneously’ such that each one is slowed down by a constant factor. (E.g. at time t = (2x + 1) * 2^n, we do step x of algorithm n.) When algorithms terminate, we check (still within the same “process” and hence slowed down by a factor of 2^n) whether a solution to the problem has been generated. If so, we return it and halt.
ETA: Ah, but the business of ‘switching processes’ is going to need more than constant time. So I guess it’s not immediately clear that this works.
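One way to make the schedule concrete, as a sketch only: `nth_machine(n, formula)` is an assumed enumeration of all programs as step-wise generators that yield None on intermediate steps and a candidate answer on their final step, and `check` verifies a candidate (for SAT, by plugging the assignment into the formula).

```python
import itertools

def dovetail(formula, nth_machine, check):
    machines = {}                           # n -> running generator
    for t in itertools.count(1):
        # Decompose t = (2x + 1) * 2^n, so machine n runs on every
        # 2^(n+1)-th tick, i.e. slowed down by a constant factor.
        n, m = 0, t
        while m % 2 == 0:
            m //= 2
            n += 1
        if n not in machines:
            machines[n] = nth_machine(n, formula)
        try:
            candidate = next(machines[n])   # execute one step of machine n
        except StopIteration:
            continue                        # machine n already halted
        # The check runs on machine n's own tick, mirroring the "still
        # within the same process" bookkeeping above (though, per the
        # ETA, the scheduler's own bookkeeping is not constant-time).
        if candidate is not None and check(formula, candidate):
            return candidate                # verified solution: halt
```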
Very interesting.
Exactly. For a more familiar example, the independence of the continuum hypothesis from ZFC means that we’ll never stumble across a set of intermediate cardinality while doing an analysis problem (otherwise, we’d have a disproof), and so the CH is valid in a practical sense even though formally undecidable.
Generally speaking, if a statement is undecidable and claims the existence of something which could be effectively verified if it were found, then for practical purposes you can assume it’s false.
EDIT: Oops- see paulfchristiano’s comment on why P vs. NP isn’t a case of this principle. I think the general principle is still OK, I just forgot about the whole “ability to effectively verify” bit in this case.
See paulfchristiano’s comments. The mistake in DanielLC’s comment is that, even though any polynomial algorithm can be “stated in ZFC”, the proof that it runs in polynomial time doesn’t necessarily lie within ZFC, or ZFC+Con(ZFC), or any other system you can name in advance.
Ah, thanks.