This article is marked as controversial and has been locked; see the talk page for details.
Quantum computing winter
The Quantum computing winter was the period from 1995 to approximately October 2031 during which experimental progress on the creation of fault-tolerant quantum computers stalled despite significant effort to construct the machines. The era ended with the publication of the Kitaev-Kalai-Alicki-Preskill (KKAP) theorem in early 2030, which purported to show that the construction of fault-tolerant quantum computers was in fact impossible due to fundamental constraints. The theorem was not widely accepted until experiments performed by Mikhail Lukin’s group in early 2031 verified the bounds it provided.
Early history
Quantum computing technology looked promising in the late 20th and early 21st centuries thanks to the celebrated Fault Tolerance theorems, as well as rapid experimental progress towards satisfying the fault tolerance threshold. The threshold theorem, which at the time was thought to rest on reasonable assumptions, guaranteed that scalable, fault-tolerant quantum computation could be performed, provided an architecture could be built with a physical error rate smaller than a known bound.
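The guarantee is usually quoted in roughly the following form (a standard heuristic scaling for surface-code-style schemes, not the precise statement of any one theorem): for a physical error rate $p$ below the threshold $p_{\mathrm{th}}$, the logical error rate of a distance-$d$ code is suppressed as

$$p_L \approx A \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor},$$

so the overhead required to reach any target logical error rate grows only polylogarithmically with the size of the computation.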
In the early 2010s, superconducting qubit architectures designed by John Martinis’ group at Google, and then HYPER Inc., looked poised to satisfy the threshold theorems, and considerable work was done to build scaled-up architectures with many millions of physical qubits by the mid-2020s.
However, despite what seemed to be threshold-theorem guarantees for their architectures, the Martinis group was never able to report large concurrences for more than 12 (disputed) logical qubits.
The scalability wall
In parallel with the development of the scalable superconducting architectures, many groups continued work on other traditional schemes such as neutral atoms, trapped ions, and Nuclear Magnetic Resonance (NMR) based devices. These devices, in turn, ran into the now-named Scalability Wall of 12 (disputed) entangled encoded qubits. For the difference between encoded and physical qubits, see the discussion in Quantum error correction.
The Martinis group hoped that polishing their hardware and scaling up their error-correction schemes would allow them to surpass the limit, but progress stalled for more than a decade.
Correlated noise catastrophe
Alexei Kitaev, building on earlier work by Gil Kalai, Robert Alicki, and John Preskill, published a series of papers in the late 2020s, culminating in the 2030 result now known as the KKAP Theorem, or the Noise Catastrophe Theorem. The proof traced how fundamental limits on the noise experienced by quantum mechanical objects irretrievably destroy the controllability of quantum systems beyond a few qubits. Uncontrollable correlations were shown to arise in any realistic noise model, essentially ruling out large-scale quantum computation.
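The mechanism is easy to illustrate with a toy model. The sketch below uses a classical 3-bit repetition code under two deliberately simple noise models (it is not the noise model analysed in the KKAP papers): with independent bit-flips, majority voting suppresses the error rate from p to roughly 3p², but once the flips are perfectly correlated the code offers no protection at all.

```python
# Toy comparison: independent vs. perfectly correlated bit-flips
# acting on a classical 3-bit repetition code with majority-vote decoding.
import random

def logical_error_rate(p, correlated, trials=200_000):
    failures = 0
    for _ in range(trials):
        if correlated:
            # A single noise event flips every copy together.
            flips = [random.random() < p] * 3
        else:
            # Each copy is flipped independently.
            flips = [random.random() < p for _ in range(3)]
        # Majority vote fails when two or more copies are flipped.
        if sum(flips) >= 2:
            failures += 1
    return failures / trials

p = 0.05
print("independent:", logical_error_rate(p, correlated=False))  # ~ 3p^2 ≈ 0.007
print("correlated: ", logical_error_rate(p, correlated=True))   # ~ p = 0.05
```

Threshold theorems rely on the independent (or weakly correlated) case; the KKAP claim was that realistic noise inevitably ends up in the correlated regime once enough qubits are involved.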
Aftermath (This section has been marked as controversial; see the talk page for details)
The immediate aftermath of the publication of the proof was disbelief. Almost all indications had pointed towards scalable quantum computation being possible, with only engineering problems standing in the way. The Nobel Prize-winning (2061) work of Mikhail Lukin’s team at Harvard only reinforced the shock felt by the Quantum Information community when the bounds provided in the KKAP Theorem’s proof were explicitly saturated in cold-atom experiments. Funding in quantum information science rapidly dwindled in the following years, and the field of Quantum Information was nearly abandoned. The field has since been reinvigorated by Kitaev’s 2061 proof of the possibility of Quantum Gravitational computers.
“and we’re back at square one”
Meh. If quantum gravity could do it, then any other quantum force could do it.
I don’t think we know anywhere near enough about quantum gravity to be sure of that.
Not that I’d be super-optimistic about “quantum gravitational computers” actually being any use relative to ordinary quantum computers—but in the absence of an actual working quantum theory of gravity I don’t see how we can know they wouldn’t make a difference in calef’s hypothetical world.
We actually know quite a bit about quantum gravity: it must fall under a quantum mechanical framework, and it needs to result in gravity, and gravitons haven’t been directly detected yet. This isn’t enough to determine what the theory is, but it is enough to say some things about it. The main two things are:
1: Since it’s just quantum mechanics, whatever it does, it’ll just be another Hamiltonian. If it changes the ground rules, then it’s not a theory of quantum gravity. It’s a theory of something-else gravity.
2: Gravity is weak. Ridiculously weak. Simply getting the states to not mush up into a continuum will be more difficult by a factor for which ‘billions of times’ would be a drastic understatement.
In order for gravity to be even noticeable, let alone the main driver of action, you either need to have really really enormous amounts of stuff, or things have to be insanely high energy and short-ranged and short-lived (unification energies).
Either of these would utterly murder coherence. In the former case your device would be big enough (and/or slow enough) that even neutrino collisions would decohere it fairly comprehensively long before the first operation could complete. In the latter case your computer is exploding at nearly the speed of light every time you turn it on and, incidentally, requires a particle accelerator that makes CERN look like a 5V power cable.
So, everything that makes gravity different from electromagnetism makes it much much worse for computing.
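For a sense of just how lopsided that comparison is, here’s the back-of-the-envelope ratio of the gravitational to the electrostatic force between two electrons (the ratio is independent of separation, since both forces fall off as 1/r²); the constants are the usual textbook values:

```python
# Gravitational vs. electrostatic force between two electrons.
# Both scale as 1/r^2, so the ratio is independent of separation.
G   = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
m_e = 9.109e-31    # electron mass, kg
k_e = 8.988e9      # Coulomb constant, N m^2 C^-2
e   = 1.602e-19    # elementary charge, C

ratio = (G * m_e**2) / (k_e * e**2)
print(f"F_gravity / F_electric ≈ {ratio:.1e}")   # ≈ 2.4e-43
```

Forty-three orders of magnitude, before you even start worrying about coherence.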
Not that I actually believe most of what I wrote above (just that it hasn’t yet been completely excluded), but if QG introduced small nonlinearities to quantum mechanics, fun things could happen, like superluminal signaling as well as the ability to solve NP-Complete and #P-Complete problems in polynomial time (which is probably better seen as a reason to believe that QG won’t have a nonlinearity).
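My rough recollection of where that claim comes from (a paraphrase of the Abrams–Lloyd-style argument, not their actual construction): prepare the usual uniform superposition over candidate solutions of a Boolean function $f$,

$$|\psi\rangle = \frac{1}{\sqrt{2^n}} \sum_{x \in \{0,1\}^n} |x\rangle\,|f(x)\rangle,$$

so a single satisfying assignment shows up only as an amplitude of order $2^{-n/2}$ on the $|f(x)=1\rangle$ component. Linear quantum mechanics can amplify that component no faster than Grover’s $\Theta(2^{n/2})$ iterations, but a map whose action depends nonlinearly on the amplitudes can be iterated to blow an exponentially small component up to order one in polynomially many steps, which is where the NP-Complete (and #P) claims come from.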
Nonlinearities in quantum mechanics? Linearity is what makes quantum mechanics amplitude-independent. If you ruin that, then the laws of nature will change from moment to moment as the wavefunction moves to fill more and more of Fock space. Suffice it to say, QM’s leading order is 1, and any higher powers are way out of reach.
Unless, that is, worlds are top-level entities in your physical theory somehow, which then brings in the full weight of the ‘what does it have to do, kill a puppy’ rant against it.