I’ll write down an algorithm that solves SAT in time N^k if any algorithm does.
On SAT instances of size T, enumerate all algorithms of size up to log(T); for each one, consider all SAT instances of size up to log(T); run the algorithm on each instance for log(T)^k steps (treating its output as 0 if it doesn't halt in time), and compare that output to the result of a brute-force search.
Take the shortest algorithm which passed all of these tests, and use it to answer your original SAT query of size T (aborting and outputting 0 if it takes more than T^k time).
This entire process takes time poly(T). Moreover, suppose there is an N^k time algorithm which solves SAT correctly, and let A be the shortest such algorithm. Then there is a finite constant T0 such that every algorithm shorter than A fails on some SAT instance of size at most T0. So for all SAT instances of size T ≥ 2^(T0 + |A|), we have log(T) ≥ T0 + |A|: A is among the enumerated algorithms and passes every test, every shorter algorithm fails one of them, and hence the algorithm I described answers correctly.
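A minimal sketch of this procedure, in Python. The helpers `run_program` (a step-limited universal machine returning None on timeout), `programs` (enumerating program encodings up to a given size, shortest first), and `instances` (enumerating all small SAT instances) are assumptions standing in for machinery the comment leaves implicit; only `brute_force_sat` is spelled out concretely.

```python
from itertools import product

def brute_force_sat(formula, n_vars):
    """Decide SAT by trying all 2^n_vars assignments.
    A formula is a list of clauses; a clause is a list of nonzero
    ints, DIMACS-style (+i / -i for the literal x_i / not x_i)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in formula):
            return 1
    return 0

def solve_sat(formula, n_vars, k, run_program, programs, instances):
    """Answer a SAT instance of size T in poly(T) time, assuming some
    program solves SAT in time N^k. All three callables are
    hypothetical (see the text above the code):
      run_program(p, formula, n_vars, limit) -> 0/1 or None (timeout)
      programs(max_size)  -> program encodings, shortest first
      instances(max_size) -> all (formula, n_vars) of size <= max_size"""
    T = n_vars + sum(len(clause) for clause in formula)  # crude size measure
    b = max(T.bit_length(), 1)                           # ~ log2(T)
    for p in programs(b):
        # Test p against brute force on every small instance; a timeout
        # counts as answering 0, exactly as in the comment.
        if all((run_program(p, f, n, b ** k) or 0) == brute_force_sat(f, n)
               for f, n in instances(b)):
            # Shortest program that passed every test: trust its answer,
            # but abort (answering 0) if it exceeds T^k steps.
            out = run_program(p, formula, n_vars, T ** k)
            return out if out is not None else 0
    # Only reachable while T is small enough that no program passed.
    return brute_force_sat(formula, n_vars)
```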
(Edit: this argument is silly; you can just run all programs of size up to log(T) and output a solution if any of them finds one.)
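That simplification works because a claimed satisfying assignment can be verified directly, so candidate programs need no testing phase at all. A sketch, reusing the hypothetical `run_program` / `programs` helpers from above (here a program is expected to output an assignment rather than a bit):

```python
def check_assignment(formula, assignment):
    """True iff `assignment` (one bool per variable) satisfies
    every clause of the DIMACS-style formula."""
    return all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in formula)

def solve_sat_simple(formula, n_vars, k, run_program, programs):
    """Return a verified satisfying assignment if any program of size
    up to log(T) produces one within T^k steps, else None. Wrong or
    malicious programs can't fool us; they only waste time."""
    T = n_vars + sum(len(clause) for clause in formula)
    for p in programs(T.bit_length()):
        candidate = run_program(p, formula, n_vars, T ** k)
        if candidate is not None and check_assignment(formula, candidate):
            return candidate
    return None
```

(If any N^k decision procedure for SAT exists, the usual search-to-decision reduction turns it into a poly-time program that outputs an assignment, and for large enough T that program appears in the enumeration.)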
Edit: Note in particular that the algorithm which takes k = BB(10^10) and instead does a brute-force search for T < BB(10^10) is guaranteed to solve SAT in poly time, provided that any algorithm does. Realistically, I think taking k = 10 and doing a brute-force search for T < 10^10^10 is virtually guaranteed to solve SAT in poly time, if any algorithm does (and I can actually write down this latter algorithm).
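Written out, that latter algorithm is just a dispatch on instance size; the comparison against 10^10^10 is done by digit count so the constant never has to be materialized. (A sketch reusing the functions above; whether k = 10 and this threshold actually suffice is exactly the comment's conjecture.)

```python
def solve_sat_concrete(formula, n_vars, run_program, programs, instances):
    """Brute force below the (astronomical, but constant) threshold;
    above it, run the k = 10 self-testing procedure from above."""
    T = n_vars + sum(len(clause) for clause in formula)
    # T < 10**(10**10)  iff  T has at most 10**10 decimal digits.
    if len(str(T)) <= 10 ** 10:
        return brute_force_sat(formula, n_vars)
    return solve_sat(formula, n_vars, 10, run_program, programs, instances)
```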
Perhaps a slightly simpler way would be to ‘run all algorithms simultaneously’ such that each one is slowed down by a constant factor. (E.g. at time t = (2x + 1) * 2^n, we do step x of algorithm n.) When algorithms terminate, we check (still within the same “process” and hence slowed down by a factor of 2^n) whether a solution to the problem has been generated. If so, we return it and halt.
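A sketch of that schedule, with Python generators standing in for the algorithms (a simplification: the generator machinery hides exactly the process-switching cost raised in the ETA below). `make_algorithm` and `is_solution` are assumptions, not real APIs:

```python
def dovetail(make_algorithm, is_solution):
    """At global time t = (2x + 1) * 2^n, run step x of algorithm n,
    so algorithm n proceeds at 1/2^(n+1) of full speed.
      make_algorithm(n): hypothetical; returns a generator that yields
        once per step and `return`s a candidate answer when it halts.
      is_solution(ans): verifies a candidate (cheap for SAT)."""
    running = {}                 # n -> generator, or None once halted
    t = 1
    while True:
        n = (t & -t).bit_length() - 1        # n = trailing zeros of t
        if n not in running:
            running[n] = make_algorithm(n)   # algorithm n starts lazily
        gen = running[n]
        if gen is not None:
            try:
                next(gen)                    # this is step x of algorithm n
            except StopIteration as halt:
                running[n] = None
                # The check runs on algorithm n's own ticks, so it is
                # "slowed down by 2^n" along with everything else it does.
                if is_solution(halt.value):
                    return halt.value
        t += 1
```

With this schedule, if algorithm n halts with a correct answer after s steps, `dovetail` returns by global time (2s + 1) * 2^n, i.e. within a constant factor of s.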
ETA: Ah, but the business of ‘switching processes’ is going to need more than constant time. So I guess it’s not immediately clear that this works.
Elaborate, please.
Very interesting.