From what I understand, if you chill everything down you also change the semiconductor's resistance along with its other properties, so it probably isn't as easy as just replacing the copper wires.
From the sources I’ve read, there aren’t any major issues running CMOS at 77 K; you only run into problems at lower temperatures, below about 40 K. I guess people aren’t seriously trying this because it’s probably not much harder to go directly to fully superconducting computers (i.e., with the logic gates made out of superconductors as well), which offers far more benefits. Here is an article about a major IARPA project pursuing that. It doesn’t seem safe to assume that we’ll get AGI before we get superconducting computers. Do you disagree? If so, can you explain why?
There was similar interest in superconducting chips about a decade ago, and it was pretty much the same story: DARPA/IARPA spearheading the research, with US intelligence as the major customer.
The 500 gigaflops per watt figure is about 100 times the computation per watt of a current GPU. This is a useful data point: it implies that roughly 99% of a GPU's energy goes into interconnect/wiring rather than the logic itself.
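As a quick sanity check on that inference (assuming, as the comparison implies, that a current GPU delivers roughly 5 gigaflops per watt):

```python
# Rough sanity check on the "99% of GPU energy is wiring" inference.
# Assumption (not from the thread): a current GPU delivers ~5 gigaflops/watt.
gpu_gflops_per_watt = 5.0
superconducting_gflops_per_watt = 500.0

ratio = superconducting_gflops_per_watt / gpu_gflops_per_watt  # ~100x

# If superconducting logic mainly eliminates wiring/interconnect losses,
# the fraction of GPU energy going to wiring is roughly:
wiring_fraction = 1 - 1 / ratio

print(f"{ratio:.0f}x more efficient; ~{wiring_fraction:.0%} of GPU energy is wiring")
```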
In terms of viability and impact, it is still uncertain how much funding superconducting circuits will require to become competitive. And even if they become competitive in some markets, for, say, the NSA, that doesn’t make them competitive in general consumer markets. Cryogenic cooling means these chips will only work in very specialized data rooms, so the market is more niche.
The bigger issue, though, is total cost competitiveness. GPUs are roughly balanced in that energy accounts for about half of the TCO (total cost of ownership). It is extremely unlikely that superconducting chips will be competitive in total cost of computation in the near future. The various tradeoffs in a superconducting design and the overall newness of the tech imply lower circuit densities, and a smaller market implies less research amortization and higher costs. Even if a superconducting chip used zero energy, it would still be much more expensive and provide fewer ops/$.
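A toy total-cost-of-ownership model illustrates the point. All the numbers below are illustrative assumptions, not figures from the thread; the takeaway is only that zero energy use does not help if hardware cost is much higher and throughput lower:

```python
# Toy TCO comparison (all numbers are illustrative assumptions).
def tco_per_op(hardware_cost, ops_per_sec, lifetime_years, watts,
               dollars_per_kwh=0.10):
    """Total lifetime cost divided by total operations performed."""
    seconds = lifetime_years * 365 * 24 * 3600
    energy_cost = (watts / 1000) * (seconds / 3600) * dollars_per_kwh
    return (hardware_cost + energy_cost) / (ops_per_sec * seconds)

# A hypothetical GPU: $5,000, 10 teraops/s, 300 W, 3-year life.
gpu = tco_per_op(hardware_cost=5000, ops_per_sec=1e13,
                 lifetime_years=3, watts=300)

# A hypothetical superconducting chip: zero energy, but 10x the hardware
# cost and lower circuit density (half the ops/s).
sc = tco_per_op(hardware_cost=50000, ops_per_sec=5e12,
                lifetime_years=3, watts=0)

print(f"GPU: {gpu:.2e} $/op, superconducting: {sc:.2e} $/op")
```

Under these assumptions the zero-energy superconducting chip still costs more per operation, because hardware cost and density dominate.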
Once we run out of room for further CPU/GPU improvements over the next decade, the TCO budget will shift increasingly toward energy, and these types of chips will become increasingly viable. So I’d estimate that the probability of impact in the next 5 years is small, but 10 or more years out it’s harder to say. To make a more reliable forecast I’d need to read more on this tech and understand the costs of cryogenic cooling better.
But very roughly, the net effect of this could be to add another leg to Moore’s-law-style growth, at least for server computation.
I guess people aren’t seriously trying this because it’s probably not much harder to go directly to full superconducting computers (i.e., with logic gates made out of superconductors as well) which offers a lot more benefits
It takes energy to maintain cryogenic temperatures, probably much more than the energy that would be saved by eliminating wire resistance. If I understand correctly, the interest in superconducting circuits is mostly in using them to implement quantum computation. Barring room temperature superconductors, there are probably no benefits of using superconducting circuits for classical computation.
From the article I linked to:

Studies indicate the technology, which uses low temperatures in the 4–10 kelvin range to enable information to be transmitted with minimal energy loss, could yield one-petaflop systems that use just 25 kW and 100-petaflop systems that operate at 200 kW, including the cryogenic cooler. Compare this to the current greenest system, the L-CSC supercomputer from the GSI Helmholtz Center, which achieved 5.27 gigaflops per watt on the most recent Green500 list. If scaled linearly to an exaflop supercomputing system, it would consume about 190 megawatts (MW), still quite a bit short of DARPA targets, which range from 20 MW to 67 MW.
ETA: 100 petaflops per 200 kW equals 500 gigaflops per watt, so it’s estimated to be about 100 times more energy efficient.
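The arithmetic from the quoted figures can be checked directly:

```python
# Check the quoted figures: 100 petaflops at 200 kW, versus L-CSC's
# 5.27 gigaflops/watt scaled linearly to an exaflop.
superconducting_flops_per_watt = 100e15 / 200e3   # = 500e9, i.e. 500 GF/W

lcsc_flops_per_watt = 5.27e9
ratio = superconducting_flops_per_watt / lcsc_flops_per_watt  # ~95x, "about 100x"

exaflop_watts = 1e18 / lcsc_flops_per_watt        # ~190 MW, matching the article
print(f"{ratio:.0f}x more efficient; "
      f"{exaflop_watts / 1e6:.0f} MW for an exaflop at L-CSC efficiency")
```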
Ok, I guess it depends on how big your computer is, due to the square-cube law: heat leaks in through the surface (which scales as L²) while the computing hardware fills the volume (which scales as L³), so bigger computers would be at an advantage.
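A minimal sketch of that scaling argument, assuming heat leaks in through the cryostat's surface while compute capacity fills its volume:

```python
# Square-cube law for cryogenic computers (illustrative scaling only).
# Heat leak grows with surface area (~L^2); compute capacity grows with
# volume (~L^3), so cooling overhead per unit of compute falls as 1/L.
def cooling_overhead_per_compute(side_length):
    surface_area = 6 * side_length ** 2   # heat leak ~ area of a cube
    volume = side_length ** 3             # compute ~ volume
    return surface_area / volume          # = 6 / side_length

small = cooling_overhead_per_compute(1.0)   # 1 m cube
large = cooling_overhead_per_compute(10.0)  # 10 m cube
print(f"relative cooling overhead: small={small}, large={large}")
```

A cube ten times larger on a side has ten times less cooling overhead per unit of compute, which is why the economics favor big installations.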