If computing the function is in P, minimization is in NP.
Why is that? In order for function minimization to be in NP, you have to be able to write a polytime-checkable proof of the fact that some input is a minimum. I don’t think that’s true in general.
I also don’t see how function minimization can be accomplished using quantum suicide. You can compute the value of the function on every possible input in parallel, but how do you know that your branch hasn’t found the minimum and therefore should commit suicide?
This seems like a relevant article, although it doesn’t directly address the above questions.
ETA: I do now see a way to do function minimization using quantum suicide: guess a random x, compute f(x), then flip a quantum coin n*f(x) times, and commit suicide unless all of them come out heads. Now the branch that found the minimum f(x) will have a measure at least 2^n times greater than any other branch (assuming f is integer-valued).
ETA 2: No, that’s not quite right, since flipping a quantum coin n*f(x) times takes time proportional to n*f(x), and f(x) could be exponentially large in the input size. So I still don’t know how to do this.
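Here is a minimal classical sketch of the bookkeeping behind the ETA scheme, assuming f is integer-valued; the particular f, the candidate set, and n are made up purely for illustration, and it only computes the post-selected branch measures rather than doing anything quantum:

```python
# Toy bookkeeping for the "flip a quantum coin n*f(x) times, suicide unless all
# heads" scheme. Each branch guesses one x and survives with probability
# 2^(-n*f(x)); we just compute the resulting measures classically.
from fractions import Fraction

def f(x):
    # Hypothetical integer-valued function to minimize (minimum 1 at x = 3).
    return x * x - 6 * x + 10

candidates = range(8)   # one branch per guessed x
n = 5                   # coin flips per unit of f(x)

surviving = {x: Fraction(1, len(candidates)) / 2 ** (n * f(x)) for x in candidates}
total = sum(surviving.values())

for x in candidates:
    print(f"x={x}  f(x)={f(x)}  share of surviving measure = {float(surviving[x] / total):.3e}")

# The minimizing branch ends up with at least 2^n = 32 times the measure of any
# other branch, but the number of flips, n*f(x), grows with f(x) itself, which
# is exactly the runtime problem noted in ETA 2.
```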
Why is that? In order for function minimization to be in NP, you have to be able to write a polytime-checkable proof of the fact that some input is a minimum. I don’t think that’s true in general.
...That’s an interesting way to look at it. For example take the traveling salesman problem: it’s traditionally converted to a decision problem by asking if there’s a route cheaper than X. Note that only “yes” answers have to have quickly checkable certificates—this is NP, not NP intersected with co-NP. Now if we start asking if X is exactly the minimum instead, would that be equivalent in complexity? And would it be helpful, seeing as it takes away our ability to do binary search?
In short, I’m a little funny in the head right now, but still think my original claim was correct.
This paper (which I found via this Ph.D. thesis) says that for TSP, the question “Is the optimal value equal to k” is D^P-complete. It also says D^P contains both NP and coNP, so assuming NP!=coNP, that problem is not in NP.
Your original claim was “If computing the function is in P, minimization is in NP.” Technically, this sentence doesn’t make sense because minimization is a functional problem, whereas NP is a class of decision problems. But the ability to minimize a function certainly implies the ability to determine whether a value is a minimum, and since that problem is not in NP, I think your claim is wrong in spirit as well.
As Nesov pointed out, TSP is in FP^NP, so perhaps a correct restatement of your claim is “If computing the function takes polynomial time, then it can be minimized in polynomial time given an NP oracle.” This still seems to imply that function minimization isn’t much harder than NP-complete problems.
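To make that restatement concrete, here is a rough sketch of the binary search it implies, with the NP oracle stubbed out as a hypothetical exists_x_with_cost_at_most(C) function; the toy cost function and the brute-force “oracle” at the bottom are just for illustration:

```python
# Sketch of "minimize in polynomial time given an NP oracle" for an
# integer-valued cost (think TSP tour length). The oracle answers the decision
# question "is there an x with f(x) <= C?"; here it is a brute-force stand-in.

def minimize_with_np_oracle(exists_x_with_cost_at_most, lo, hi):
    # Binary search over the threshold C in [lo, hi]. Uses O(log(hi - lo))
    # oracle calls, i.e. polynomially many if the optimal value fits in
    # polynomially many bits.
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_x_with_cost_at_most(mid):
            hi = mid          # some x achieves cost <= mid
        else:
            lo = mid + 1      # no x does, so the optimum is above mid
    return lo                 # the minimal achievable value of f

# Illustrative use on a toy instance with a brute-force "oracle".
xs = range(16)
f = lambda x: (x - 5) ** 2 + 3                     # toy cost, minimum 3 at x = 5
oracle = lambda C: any(f(x) <= C for x in xs)
print(minimize_with_np_oracle(oracle, 0, max(f(x) for x in xs)))   # prints 3
```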
Are there any problems in PostBQP that can be said to be much harder than NP-complete problems? I guess you asked that already.
I can’t read those links, but determining whether an input is a minimum seems to be in co-NP because it allows easy verification of a “no” answer by counterexample. So have those people proved that NP=co-NP, or am I just being dense?
This still seems to imply that function minimization isn’t much harder than NP-complete problems.
Yep, as a programmer I should’ve said “at most polynomially harder than a problem in NP”. You’re right that my wording was bad. I still stand by the spirit, though :-)
You can’t get the exact answer, but you can approach it exponentially quickly by doing a binary search. So, if you want N digits of accuracy, you’re going to need O(N) iterations of the search. Someone mentioned this elsewhere but I can’t find the comment now.
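A quick toy illustration of that convergence rate; the test for “is the minimum at or below C” is a brute-force stand-in here, and in context it would be whatever answers the threshold question:

```python
# Each bisection step on the threshold C halves the bracketing interval, so
# after N steps the answer is known to within (original interval) / 2^N:
# N bits of accuracy for O(N) iterations.

def min_at_most(f, xs, C):
    # Hypothetical threshold test: does any x achieve f(x) <= C?
    return any(f(x) <= C for x in xs)

def bisect_minimum(f, xs, lo, hi, bits):
    for _ in range(bits):          # N iterations -> N bits of accuracy
        mid = (lo + hi) / 2
        if min_at_most(f, xs, mid):
            hi = mid
        else:
            lo = mid
    return hi                      # overshoots the minimum by at most (hi - lo) / 2**bits of the start interval

xs = [i / 1000 for i in range(1001)]            # sample the domain [0, 1]
f = lambda x: (x - 0.3) ** 2 + 0.125            # true minimum is 0.125
print(bisect_minimum(f, xs, 0.0, 1.0, 30))      # ~0.125, good to ~9 decimal digits
```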
Your method above would work better, actually (assuming the function is valued from 0 to 1). Just randomly guess x, compute f(x)^n, then place a qubit in a superposition such that it is f(x)^n likely to be 0, and 1 - f(x)^n likely to be 1. Measure the qubit. If it is 0, quantum suicide. You can do the whole thing a few times, taking n = 10, n = 1000, and so on, until you’re satisfied you’ve actually got the minimum. Of course, there’s always the chance there’s a slightly smaller minimum somewhere, so we’re just getting an approximate answer like before, albeit an incredibly good approximate answer.
It’s a classical point: you can replace the question “What is the minimal value of f(x) (and what witness x gives it)?” with the question “Is there an x with f(x) less than C (and what is an example of such x)?”, and then use binary search on C to pinpoint the minimum. Since the answer has to be writable down in polynomially many bits anyway, the binary search over C takes only polynomially many steps.
There was a dumb comment here, I deleted it. You’re actually completely right, thanks!
I’m a bit confused too, but I found a Ph.D. thesis that answers a bunch of these questions. I’m still reading it.
On page 5, it says that for TSP, the question “Is the optimal value equal to k” is D^P-complete (which is something I haven’t heard of before).
Yeah, my comment was pretty stupid. Thanks.