I’m not sure the halved doubling time for quantum computers is right.
Maybe I’m not getting into the spirit of accepting the proposed counterfactuals—but is quantum computer performance doubling regularly at all? It seems more as though it is jammed up against decoherence problems already.
It’s a purely theoretical counterfactual about the combination of Moore’s law and Grover’s algorithm.
Moore’s law says that a computer becomes twice as efficient every 18 months. Grover’s algorithm says that the time taken by a quantum computer to solve SAT is the square root of the time required by a classical computer. Thus in 18 months, Moore’s-law hardware improvement should make the quantum computer 4 times as fast.
Assume the number of quantum gate-ops per second doubles every 18 months. Assume SAT is O(2^n) on a classical computer and O(2^(n/2)) by Grover’s. Then the maximum feasible problem size on a classical computer increases by 1 every 18 months, and on a quantum computer increases by 2. No factors of anything involved.
Alternatively, if you measure a fixed problem size, then by assumption speed doubles for both.
So where does 4x come from?
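To make the “add one vs. add two” arithmetic in the previous comment concrete, here is a minimal Python sketch under exactly those assumptions (the function name, the budget, and the 1e9 starting speed are mine, for illustration):

```python
# Assumptions from the comment above: classical SAT time ~ 2**n,
# quantum (Grover) time ~ 2**(n/2), and gate speed doubling every 18 months.
import math

def max_feasible_n(gate_ops_per_sec: float, budget_secs: float, quantum: bool) -> int:
    """Largest n solvable within the budget: 2**(n/k) <= total ops, with k = 2 for Grover."""
    total_ops = gate_ops_per_sec * budget_secs
    k = 2 if quantum else 1
    return int(k * math.log2(total_ops))

BUDGET = 1.0  # a fixed time budget, in seconds
for doublings in range(4):
    speed = 1e9 * 2 ** doublings  # gate-ops/sec after 0, 18, 36, 54 months
    classical = max_feasible_n(speed, BUDGET, quantum=False)
    grover = max_feasible_n(speed, BUDGET, quantum=True)
    print(doublings * 18, "months: classical n =", classical, "quantum n =", grover)
# Classical n climbs by 1 per doubling, quantum n by 2 -- no factor of 4 in sight.
```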
It just comes from treating classical computers as the correct measuring stick. It would be more precise to refer, as you do, to 18 months as the “add one” time rather than the doubling time. But if you do call it the doubling time, then for quantum computers it becomes the 4x time. Of course, it’s not uniform: it doesn’t apply to problems in P.
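To spell out the measuring-stick conversion (the function name and the 1e9 figure are mine): a Grover search that uses Q quantum operations matches a classical search of Q² operations, so doubling the quantum hardware quadruples the reading on the classical scale.

```python
def classical_equivalent_ops(quantum_ops: float) -> float:
    """Classical work matched by a Grover search using quantum_ops operations:
    searching N items takes ~sqrt(N) quantum ops but ~N classical ones."""
    return quantum_ops ** 2

Q = 1e9                                 # quantum gate-ops available today (arbitrary)
print(classical_equivalent_ops(Q))      # 1e+18
print(classical_equivalent_ops(2 * Q))  # 4e+18: a 2x quantum speedup reads as 4x
```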
With classical computers, Moore’s law improves serial and parallel performance simultaneously, by making components smaller.
With quantum computers, serial and parallel performance are decoupled: adding qubits improves parallel performance, while miniaturisation has no effect on the number of qubits but does improve serial processing performance. So there are two largely independent means of speeding up quantum computing. Which one supposedly doubles twice as fast as in classical computers? Neither, as far as I can see.
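One way to make the decoupling concrete (a toy model of my own, not from the thread): Grover search over n-bit inputs needs both n qubits and roughly 2^(n/2) serial gate operations, so the feasible problem size is capped by whichever resource runs out first.

```python
import math

def feasible_grover_n(qubits: int, gate_rate: float, budget_secs: float) -> int:
    """Feasible n is the minimum of the qubit cap and the serial cap 2**(n/2) <= ops."""
    serial_cap = int(2 * math.log2(gate_rate * budget_secs))
    return min(qubits, serial_cap)

print(feasible_grover_n(qubits=50,  gate_rate=1e9, budget_secs=1.0))  # 50: qubit-limited
print(feasible_grover_n(qubits=100, gate_rate=1e9, budget_secs=1.0))  # 59: serial-limited
# Adding qubits helps only the first case; faster gates help only the second.
```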
Sorry, my original response should have been “yes, you aren’t getting into the spirit of the counterfactual.”
Well, I can see what math was done. The problem is the false assertion. I learned in math classes that if you accept one false thing, you can prove everything, and consequently your understanding of the difference between what’s true and what’s not dwindles to zero. You can’t just believe one false thing.
If we actually “switched to quantum computers” it isn’t clear we would get an exponential trajectory at all—due to the proximity of physical limits. If we did get an exponential trajectory, I can see no coherent reason for thinking the doubling time would relate to that of classical computers—because the technology is quite different. Currently, quantum computers grow mostly by adding qubits—not by the shrinking in component size that drives Moore’s law in classical computers. That increases their quantum-parallelism, but doesn’t affect their speed.
I guess that quantum computers halve the doubling time, as compared to a classical computer, because each extra qubit doubles the dimension of the available state space (so doubling the qubit count squares it). This could give the factor of two in the exponent of Moore’s law.
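A two-line check of the state-space arithmetic (the function name is mine):

```python
def state_space_dim(n_qubits: int) -> int:
    """n qubits span a 2**n-dimensional Hilbert space."""
    return 2 ** n_qubits

print(state_space_dim(10), state_space_dim(11))          # 1024, 2048: one extra qubit doubles it
print(state_space_dim(20) == state_space_dim(10) ** 2)   # True: doubling the qubit count squares it
```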
Quantum computing performance currently isn’t doubling, but it isn’t jammed either. Decoherence is no longer considered to be a fundamental limit; it’s more a practical inconvenience. The change that brought this about was the invention of quantum error-correcting codes.
However, experimental physicists are still searching for the ideal practical implementation. You might compare the situation to that of the pre-silicon days of classical computing. Until this gets sorted out, I doubt there will be any Moore’s-law-type growth.
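To give a feel for how error-correcting codes tame bit-flip noise in principle, here is a toy simulation of the three-qubit bit-flip code, the simplest quantum error-correcting code (the modelling and all names are mine; only X errors are handled, not general decoherence):

```python
import numpy as np

def encode(a: complex, b: complex) -> np.ndarray:
    """|psi> = a|0> + b|1>  ->  a|000> + b|111>, as an 8-dim state vector."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = a
    state[0b111] = b
    return state

def flip(state: np.ndarray, qubit: int) -> np.ndarray:
    """Apply an X (bit-flip) error to one qubit; qubit 0 is the leftmost bit."""
    out = np.zeros_like(state)
    for i, amp in enumerate(state):
        out[i ^ (1 << (2 - qubit))] = amp
    return out

def syndrome(state: np.ndarray) -> tuple:
    """Parities of qubit pairs (0,1) and (1,2). For a single X error these are
    the same for both basis components, which is the whole point of the code."""
    i = int(np.argmax(np.abs(state)))  # any nonzero basis index gives the same answer
    bits = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

a, b = 0.6, 0.8
state = flip(encode(a, b), qubit=1)        # noise flips the middle qubit
which = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(state)]
if which is not None:
    state = flip(state, which)             # undo the diagnosed error
print(np.allclose(state, encode(a, b)))    # True: the logical qubit survived
```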
I looked at:
http://en.wikipedia.org/wiki/Quantum_error_correction
The bit about the threshold theorem looks interesting.
However, I would be more impressed by a working implementation ;-)