There was similar interest in superconducting chips about a decade ago, and it was pretty much the same story: DARPA/IARPA spearheading the research, with US intelligence as the major customer.
The 500 gigaflops per watt figure is roughly 100 times the computation per watt of a current GPU, which is a useful comparison: if superconducting logic mainly eliminates resistive losses in the wires, then roughly 99% of a GPU's energy budget must be going to interconnect/wiring.
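To make that arithmetic explicit, here is a minimal back-of-envelope sketch. The ~5 GFLOPS/W GPU figure is an assumption implied by the ~100x ratio, not a measured number from the source:

```python
# Back-of-envelope check of the 100x / 99% claim.
# Only the 500 GFLOPS/W figure comes from the text; the GPU number is assumed.
superconducting_gflops_per_watt = 500.0
gpu_gflops_per_watt = 5.0  # rough assumed figure, implied by the ~100x ratio

ratio = superconducting_gflops_per_watt / gpu_gflops_per_watt  # ~100x

# If nearly all of the savings come from eliminating resistive losses in the wires,
# the share of GPU energy spent on interconnect/wiring is roughly:
wiring_fraction = 1.0 - 1.0 / ratio  # ~0.99

print(f"efficiency ratio: {ratio:.0f}x")
print(f"implied interconnect/wiring share of GPU energy: {wiring_fraction:.0%}")
```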
In terms of viability and impact, it is still uncertain how much funding superconducting circuits will need to become competitive. And even if they become competitive in some markets, say for the NSA, that doesn't make them competitive in general consumer markets. Cryogenic cooling means these chips will only work in very specialized data rooms, so the market is more niche.
The bigger issue, though, is total cost competitiveness. GPUs are roughly balanced in that energy is about half of the TCO (total cost of ownership). It is extremely unlikely that superconducting chips will be competitive in total cost of computation in the near future: the various tradeoffs in a superconducting design and the overall newness of the tech imply lower circuit densities, and a smaller market means less amortization of research costs and thus higher prices. Even if a superconducting chip used zero energy, it would still be much more expensive and provide fewer ops/$.
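A toy calculation makes the ceiling concrete. Only the ~50% energy share comes from the point above; the 3x hardware-cost premium is a placeholder assumption for illustration:

```python
# Rough TCO comparison (illustrative, normalized numbers only).
def ops_per_dollar(hardware_cost: float, energy_cost: float, total_ops: float) -> float:
    """Total operations delivered per dollar of total cost of ownership."""
    return total_ops / (hardware_cost + energy_cost)

# Baseline GPU: energy is roughly half of TCO (from the text).
gpu_hw, gpu_energy, gpu_ops = 1.0, 1.0, 1.0

# Hypothetical superconducting part: zero energy cost, but lower circuit density and
# a smaller market push hardware cost per op up -- say 3x (assumed placeholder).
sc_hw, sc_energy, sc_ops = 3.0, 0.0, 1.0

print(f"GPU ops/$:             {ops_per_dollar(gpu_hw, gpu_energy, gpu_ops):.2f}")
print(f"Superconducting ops/$: {ops_per_dollar(sc_hw, sc_energy, sc_ops):.2f}")

# Even with free energy, the best case is only a 2x TCO win over the GPU baseline,
# so any hardware-cost premium beyond ~2x per op loses on ops/$.
```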
Once we run out of room for further CPU/GPU improvements over the next decade, the TCO budget will shift increasingly towards energy, and these types of chips will become increasingly viable. So I'd estimate that the probability of impact in the next 5 years is small, but 10 or more years out it's harder to say. To make a more reliable forecast I'd need to read more on this tech and understand the costs of cryogenic cooling better.
But very roughly, the net effect of this could be to add another leg to Moore's-law-style growth, at least for server computation.