With a similar technique you can solve any NP-complete problem. Actually, you can solve much harder problems: you can minimize any function you have enough computing power to evaluate. You could apply this, for instance, to genetic algorithms and arrive at the globally fittest solution. You could likewise solve for the “best” AI under some constraints, such as: find the program less than 10000 characters long that performs best on a Turing test.
If mangled worlds is correct (and I understand it correctly), then sufficiently improbable events fail to happen at all. What kind of limit would this place on the problems you can solve with “quantum suicide voodoo”?
This is a very interesting point and somewhat shakes my belief in the current version of MWI.
What I could imagine is that, since the total information content of the multiverse must be finite, there is some additional quantization going on that makes highly improbable branches “too fuzzy” to be observable. Or something like that.
Not likely. You’re already in a highly improbable branch, and it’s getting less probable every millisecond.
We have seen in the sister topic that mangled worlds theory can in fact account for such information loss. However, MWT has deficiencies similar to those of single-world theories: non-local action, nonlinearity, discontinuity. That does not mean it can’t be true.
I would not state this for sure. There could still be quite a difference between astronomically unlikely and superastronomically unlikely.
So, for example, if the total information content of the multiverse is bounded by 2^1000 bits, you could go down to an insanely small probability of 1/2^(2^1000), but not to the “merely” 2^(2^1000) times less probable 1/2^(2^1001) (since 1/2^(2^1001) = 1/2^(2^1000) × 1/2^(2^1000)).
Why would the information content of a quantum universe be measured in bits, rather than qubits? 2^1000 qubits is enough to keep track of every possible configuration of the Hubble volume, without discarding any low magnitude ones. (Unless of course QM does discard low magnitude branches, in which case your quantum computer would too… but such a circular definition is consistent with any amount of information content.)
Why much harder? If computing the function is in P, minimization is in NP.
EDIT: this statement is wrong; see Wei Dai’s comments below for the explanation. Corrected version: “minimization is at most polynomially harder than an NP problem”.
Why is that? In order for function minimization to be in NP, you have to be able to write a polytime-checkable proof of the fact that some input is a minimum. I don’t think that’s true in general.
I also don’t see how function minimization can be accomplished using quantum suicide. You can compute the value of the function on every possible input in parallel, but how do you know that your branch hasn’t found the minimum and therefore should commit suicide?
This seems like a relevant article, although it doesn’t directly address the above questions.
ETA: I now see a way to do function minimization using quantum suicide: guess a random x, compute f(x), then flip a quantum coin n*f(x) times, and commit suicide unless all of them come out heads. Now the branch that found the minimum f(x) will have a measure at least 2^n times greater than any other branch.
ETA 2: No, that’s not quite right, since flipping a quantum coin n*f(x) times takes time proportional to f(x), which could be exponential. So I still don’t know how to do this.
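To make the measure bookkeeping concrete, here is a minimal classical sketch of the coin-flip scheme above, assuming f is integer-valued and every branch starts with equal measure; the function values and names are purely illustrative.

```python
# Branch x survives its n*f(x) quantum coin flips with probability 2^(-n*f(x));
# track the relative measure of the surviving branches.
from fractions import Fraction

def surviving_measures(f_values, n):
    """Relative measure of each surviving branch after postselection."""
    raw = {x: Fraction(1, 2 ** (n * fx)) for x, fx in f_values.items()}
    total = sum(raw.values())
    return {x: m / total for x, m in raw.items()}

f_values = {"x0": 3, "x1": 1, "x2": 5}   # toy integer-valued f; x1 is the minimizer
for n in (1, 5, 10):
    measures = surviving_measures(f_values, n)
    # the minimizing branch outweighs any other by a factor of at least 2^n
    print(n, {x: float(m) for x, m in measures.items()})
```

The ETA 2 objection is about generating the n*f(x) flips inside each branch, not about this bookkeeping.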
...That’s an interesting way to look at it. For example take the traveling salesman problem: it’s traditionally converted to a decision problem by asking if there’s a route cheaper than X. Note that only “yes” answers have to have quickly checkable certificates—this is NP, not NP intersected with co-NP. Now if we start asking if X is exactly the minimum instead, would that be equivalent in complexity? And would it be helpful, seeing as it takes away our ability to do binary search?
In short, I’m a little funny in the head right now, but still think my original claim was correct.
This paper (which I found via this Ph.D. thesis) says that for TSP, the question “Is the optimal value equal to k” is D^P-complete. It also says D^P contains both NP and coNP, so assuming NP!=coNP, that problem is not in NP.
Your original claim was “If computing the function is in P, minimization is in NP.” Technically, this sentence doesn’t make sense because minimization is a functional problem, whereas NP is a class of decision problems. But the ability to minimize a function certainly implies the ability to determine whether a value is a minimum, and since that problem is not in NP, I think your claim is wrong in spirit as well.
As Nesov pointed out, TSP is in FP^NP, so perhaps a restatement of your claim that is correct is “If computing the function takes polynomial time, then it can be minimized in polynomial time given an NP oracle.” This still seems to imply that function minimization isn’t much harder than NP-complete problems.
Are there any problems in PostBQP that can be said to be much harder than NP-complete problems? I guess you asked that already.
There was a dumb comment here, I deleted it. You’re actually completely right, thanks!
I can’t read those links, but determining whether an input is a minimum seems to be in co-NP because it allows easy verification of a “no” answer by counterexample. So have those people proved that NP=co-NP, or am I just being dense?
Yep, as a programmer I should’ve said “at most polynomially harder than a problem in NP”. You’re right that my wording was bad. I still stand by the spirit, though :-)
I’m a bit confused too, but I found a Ph.D. thesis that answers a bunch of these questions. I’m still reading it.
On page 5, it says that for TSP, the question “Is the optimal value equal to k” is D^P-complete (which is something I haven’t heard of before).
You can’t get the exact answer, but you can approach it exponentially quickly by doing a binary search. So, if you want N digits of accuracy, you’re going to need O(N) time. Someone mentioned this elsewhere but I can’t find the comment now.
Your method above would work better, actually (assuming the function is valued from 0 to 1). Just randomly guess x, compute f(x), then place a qubit in a superposition such that it is 2^(-n*f(x)) likely to be 1 and 1 - 2^(-n*f(x)) likely to be 0. Measure the qubit; if it is 0, quantum suicide. That gives the same survival probabilities as the n*f(x) coin flips, but with a single measurement, so the cost no longer grows with f(x). You can do the whole thing a few times, taking n = 10, n = 1000, and so on, until you’re satisfied you’ve actually got the minimum. Of course, there’s always the chance there’s a slightly smaller minimum somewhere, so we’re just getting an approximate answer like before, albeit an incredibly good approximate answer.
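For what it’s worth, here is a rough classical simulation of that single-measurement variant, with survival probability 2^(-n*f(x)) and the postselection modeled by simple rejection; the toy objective and every name in it are made up for illustration.

```python
# Many independent runs: guess x uniformly, survive with probability 2^(-n*f(x)).
# Surviving "observers" cluster on the minimizer.
import random

def run_once(xs, f, n, rng):
    x = rng.choice(xs)
    return x if rng.random() < 2.0 ** (-n * f(x)) else None

xs = list(range(16))
f = lambda x: abs(x - 11) / 16            # toy objective valued in [0, 1]; minimum at x = 11
rng = random.Random(0)
survivors = [s for s in (run_once(xs, f, n=160, rng=rng) for _ in range(200_000)) if s is not None]

# nearly every surviving run has found the minimizer
print(max(set(survivors), key=survivors.count), len(survivors))
```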
Yeah, my comment was pretty stupid. Thanks.
It’s a classical point: you can replace the question “What is the minimal value of f(x) (what is the witness x that gives the minimal value)?” with “Is there a parameter x that gives a value f(x) less than C (and what is an example of such x)?”, and then use binary search on C to pinpoint the minimum. Since being able to write down the answer in polynomial time is a given, the answer has only polynomially many bits, so the binary search needs only polynomially many values of C and you can take the ability to run it in polynomial time for granted.
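A short sketch of that reduction, assuming an integer-valued objective; the brute-force exists_below here is just a stand-in for whatever answers the NP-style decision question.

```python
# Binary search on the threshold C, asking one decision question per step:
# "is there an x with f(x) <= mid?"
def exists_below(f, xs, c):
    return any(f(x) < c for x in xs)           # stand-in for the NP oracle

def minimize(f, xs, lo, hi):
    """Return min f(x) over xs, assuming every value lies in [lo, hi]."""
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_below(f, xs, mid + 1):       # some f(x) <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo                                  # only O(log(hi - lo)) decision queries

xs = range(100)
f = lambda x: (x - 37) ** 2 % 91 + 4           # arbitrary toy objective with minimum 4
print(minimize(f, xs, 0, 200), min(f(x) for x in xs))   # both print 4
```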
It’s in FP^NP, not in NP. Only the decision problem for whether a given value C is more than the minimum is in NP, not minimization itself.
Yes, that’s certainly right, thanks. I was careless in assuming that people would just mentally convert everything to decision problems, like I do :-)
ETA: Vladimir, I couldn’t find any proof that NP is strictly contained in PP; is it known? Our thread seems to depend on that question only. It should be true because PP is so freaky powerful, but nothing’s turned up yet.
P⊆NP⊆PP⊆PSPACE, but we don’t even have a proof of P≠PSPACE.
Unconditionally proven strict containments are pretty rare. You’ll probably want to settle for some lesser evidence.
Hah. Thanks, that settles the issue for now: short of Scott Aaronson making an unexpected breakthrough, we have no proof that quantum suicide computing can solve any non-polynomial problems :-)
The paper “On the computational power of PP and ⊕P” says it gives strong evidence that PP is strictly harder than PH, which contains NP.
According to this, FNP is strictly harder than NP.
No idea about PP. Where did PP come up, or how does it relate to FNP?
ETA: Ah, I see. Aaronson’s paper here shows that postselection (which is what our suicide voodoo is) gives you PP. Since postselection also gives you FNP, and FNP is harder than NP, we should have that PP is strictly harder than NP.
Jordan, I’m really sorry to inject unneeded rigor into the discussion again, but the statement “FNP is strictly harder than NP” doesn’t work. It makes sense to talk about P=NP because P and NP are both sets of decision problems, but FNP is a set of function problems, so to compare it with NP you have to provide a mapping of some sort. Thus your argument doesn’t prove that PP>NP yet.
For me personally, function problems are exotic beasts and I’d prefer to settle the power of quantum voodoo on decision problems first :-)
There’s a natural partial mapping in my mind, but a rigorous mapping is beyond me. Check out this. Apparently it can be shown that FNP is strictly harder than NP, under certain conditions. I didn’t understand the condition, or whether it was reasonable.
Regardless of all this complexity jazz, which has definitely exceeded my grasp, my random dictionary example still demonstrates that certain problems can be solved exponentially faster with postselection than without, even if it doesn’t prove that FNP > NP.
ETA: Coming from a numerics background, decision problems seem like exotic beasts. I’d prefer to settle the power of quantum voodoo on function problems first =P
Consider the following problem, given N:
Create a list with 2^N entries, where each entry is a random number from 0 to 1. What is the smallest entry?
This is a function minimization problem, where the function takes n and returns the n-th element of the list. The cost of computing the function is O(1). There is no way to minimize it, however, without looking at every value, which is O(2^N). With quantum suicide voodoo, we can minimize it in O(1).
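Here is a toy classical rendering of this example, with the postselection step modeled by simply computing each branch’s surviving measure (which, of course, still touches every entry; only the hypothetical postselected machine would avoid that). The weighting 2^(-n*f(i)) follows the scheme discussed above, and all names and parameters are illustrative.

```python
# Branch i guesses index i and survives with probability proportional to 2^(-n*f(i));
# relative to the minimizing branch, every other branch is exponentially suppressed.
import random

random.seed(0)
N = 10
table = [random.random() for _ in range(2 ** N)]    # the list handed to us on a flash drive
f = table.__getitem__                               # a single evaluation is a cheap lookup

n = 10 ** 6
fmin = min(table)
measure = [2.0 ** (-n * (f(i) - fmin)) for i in range(2 ** N)]   # measure relative to the best branch

best = max(range(2 ** N), key=measure.__getitem__)
print(best == min(range(2 ** N), key=f))            # True: the heaviest surviving branch is the minimizer
print(1.0 / sum(measure))                           # its share of the surviving measure (close to 1)
```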
1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn’t change things.
2) Accessing the nth element of a list isn’t O(1), because you have to read the bits of n for chrissake.
3) I’m not sure how you’re going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you’re parallel) proportional to the length of the input list.
You can call it M, if you like. Then individual function evaluations cost log(M). The point is the relative cost difference between function evaluations and function minimization.
Good point. Function calls will be O(N), since reading the bits of n takes N bits.
Right, the setup of the problem is separate. It could have been handed to you on a flash drive. The point is that there exist functions such that we can calculate single evaluations quickly, but can’t minimize the entire function quickly.
Well yeah, but the definitions of P and NP involve the size of the input. Your original claim was that we can solve “much harder than NP-complete” problems with quantum voodoo. I don’t think you’ve proved it yet, but if you revise it to somehow talk about “relative cost differences”, it does become somewhat more believable.