Assume the number of quantum gate-ops per second doubles every 18 months. Assume SAT is O(2^n) on a classical computer and O(2^(n/2)) with Grover's algorithm. Then the maximum feasible problem size on a classical computer increases by 1 every 18 months, and on a quantum computer it increases by 2. No factors of anything involved. Alternatively, if you measure a fixed problem size, then by assumption speed doubles for both. So where does 4x come from?
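To make that arithmetic concrete, here is a minimal sketch assuming only the stated toy cost models (classical ~2^n gate-ops, Grover ~2^(n/2)) and the 18-month doubling of the gate-op budget; the function name and starting budget are invented for illustration:

```python
# Toy illustration of the +1 vs +2 claim, under the assumed cost models
# (classical SAT ~ 2**n gate-ops, Grover ~ 2**(n/2) gate-ops).
import math

def max_feasible_n(budget_ops: float, quantum: bool = False) -> float:
    """Largest problem size n with cost(n) <= budget_ops."""
    # Classical: 2**n      <= B  =>  n <= log2(B)
    # Grover:    2**(n/2)  <= B  =>  n <= 2 * log2(B)
    n = math.log2(budget_ops)
    return 2 * n if quantum else n

budget = 2.0 ** 40                      # arbitrary initial gate-op budget
for period in range(4):                 # four 18-month doubling periods
    classical = max_feasible_n(budget)
    grover = max_feasible_n(budget, quantum=True)
    print(f"period {period}: classical n = {classical:.0f}, Grover n = {grover:.0f}")
    budget *= 2                         # budget doubles each period, by assumption
```

Running it, the classical n climbs 40, 41, 42, … while the Grover n climbs 80, 82, 84, …: each doubling adds 1 classically and 2 under the Grover model, with no factor of 4 anywhere.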
It just comes from treating classical computers as the correct measuring stick. It would be more precise to refer, as you do, to 18 months as the 'add one' time rather than the doubling time. But if you do call it the doubling time, then for quantum computers it becomes the 4x time. Of course, it's not uniform—it doesn't apply to problems in P.
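One way to spell that out, writing G for the quantum machine's gate-op budget (a symbol introduced here just for this derivation): a budget of G covers Grover instances of size n = 2 log2 G, which would cost 2^n = G^2 classical gate-ops. Measured on that classical yardstick, doubling G quadruples the equivalent classical work:

$$2^{n} = \left(2^{n/2}\right)^{2} = G^{2}, \qquad G \mapsto 2G \;\Longrightarrow\; G^{2} \mapsto 4G^{2}.$$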
With classical computers, Moore's law improves serial and parallel performance simultaneously—by making components smaller.
With quantum computers, serial and parallel performance are decoupled—more qubits improve parallel performance, while miniaturisation has no effect on the number of qubits but does improve serial processing performance. So there are two largely independent means of speeding up quantum computing. Which one supposedly doubles twice as fast as classical computers? Neither—AFAICS.
Sorry, my original response should have been “yes, you aren’t getting into the spirit of the counterfactual.”