1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn’t change things.
2) Accessing the nth element of a list isn’t O(1), because you have to read the bits of n for chrissake.
3) I’m not sure how you’re going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you’re parallel) proportional to the length of the input list.
> 1) The problem of finding the smallest entry in a list is linear-time with respect to the size of the input; calling that size 2^N instead of M doesn’t change things.
You can call it M, if you like. Then individual function evaluations cost log(M). The point is the relative cost difference between function evaluations and function minimization.
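Concretely: a single evaluation costs about log2(M) = N steps just to read the index, while minimization requires on the order of M = 2^N evaluations, so the relative gap is exponential no matter what you call the input size.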
> 2) Accessing the nth element of a list isn’t O(1), because you have to read the bits of n for chrissake.
Good point. Function calls will be O(N), since an index into a list of 2^N entries is N bits long.
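To put a number on that, here is a quick Python check, assuming the cost model where touching each bit of the index counts as one step:

```python
N = 40
last_index = 2**N - 1           # the largest index into a list of 2**N entries
print(last_index.bit_length())  # 40: merely naming an element already costs N bits
```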
> 3) I’m not sure how you’re going to solve this with quantum voodoo in O(1), because just setting up the computation will take time (or space if you’re parallel) proportional to the length of the input list.
Right, the setup of the problem is separate; it could have been handed to you on a flash drive. The point is that there exist functions whose individual values we can calculate quickly but whose minimum we can’t find quickly.
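A minimal Python sketch of such a function (the choice of SHA-256, and every name here, is illustrative, not anything from the discussion above): each evaluation costs a fixed number of hash operations, while nothing obviously better than trying all 2^N indices is known for finding the minimum, and even Grover search would still need on the order of 2^(N/2) evaluations.

```python
import hashlib

N = 64  # the domain is {0, 1, ..., 2**N - 1}: far too many points to enumerate

def f(n: int) -> int:
    """One evaluation: hash the N-bit index and read the digest as an integer.
    The cost is a few hundred machine operations, independent of 2**N."""
    digest = hashlib.sha256(n.to_bytes((N + 7) // 8, "big")).digest()
    return int.from_bytes(digest, "big")

print(f(123456))  # a single evaluation is cheap

# Minimizing f over the whole domain is another matter: the only generic
# classical approach is brute force over all 2**64 indices, and Grover's
# algorithm would still need roughly 2**32 evaluations.
# min(f(n) for n in range(2**N))  # hopeless to run as written
```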
> The point is the relative cost difference between function evaluations and function minimization.
Well yeah, but the definitions of P and NP involve the size of the input. Your original claim was that we can solve “much harder than NP-complete” problems with quantum voodoo. I don’t think you’ve proved it yet, but if you revise it to somehow talk about “relative cost differences”, it does become somewhat more believable.