Re: quantum computing.
I am bearish that it will be a big deal in relation to AI.
This is because:
The exponential speedups are very hard to achieve in practice.
The quadratic speedups are mostly lost when you parallelize (see the sketch after this list).
At the current rate of progress and barring a breakthrough it seems it will be a couple of decades until we have useful quantum computing.
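To put rough numbers on the parallelization point, here is a back-of-envelope sketch (my own toy model, not from any paper, and the search-space size is purely illustrative): classical brute-force search parallelizes perfectly across P machines, while splitting a Grover-style search across P QPUs only buys a further factor of √P, so the quantum advantage over a well-parallelized classical baseline shrinks as you add hardware.

```python
import math

def classical_queries(n: int, p: int) -> float:
    """Worst-case oracle queries for classical brute-force search over
    n items, split evenly across p parallel machines."""
    return n / p

def grover_queries(n: int, p: int) -> float:
    """Approximate Grover oracle queries when the search space is
    partitioned across p independent QPUs, each searching n/p items
    in about (pi/4) * sqrt(n/p) calls."""
    return (math.pi / 4) * math.sqrt(n / p)

n = 10**12  # illustrative search-space size
for p in (1, 10**2, 10**4, 10**6):
    advantage = classical_queries(n, p) / grover_queries(n, p)
    print(f"P = {p:>9,}: quantum advantage ~ {advantage:,.0f}x")

# The advantage scales like sqrt(n/p): large for a single machine, but it
# keeps shrinking as more parallel hardware is thrown at the classical side,
# and this ignores error-correction overhead on the quantum side entirely.
```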
Huh. How much stock do you put in extrapolating trends in qubit count like that last link? I would assume that the tradeoff they see between chip size and quality is due to selection/publication bias rather than the manufacturing process per se. This means that once the unselected manufacturing process can make error-corrected circuits, there's capacity for a steep rise in investment and circuit size.
I do believe that the tradeoff is real, and that it has a very clear physical reason: larger chips require more gates to perform a single quantum operation, so if the topology of the chip stays roughly the same, I expect the fidelity needed to keep errors in check to increase drastically with size.
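As a back-of-envelope illustration of that scaling (my own toy assumptions: the `required_gate_error` helper and the `GATES_PER_QUBIT` constant are made up for this sketch, not taken from any paper): if each gate fails independently with probability eps, a circuit of G gates succeeds with probability roughly (1 - eps)^G, so holding a fixed success probability forces eps to shrink roughly like 1/G as chips, and hence gate counts, grow.

```python
def required_gate_error(num_gates: int, target_success: float = 0.5) -> float:
    """Largest per-gate error rate eps such that a circuit of num_gates
    independently failing gates still succeeds with probability
    target_success, i.e. (1 - eps)**num_gates >= target_success."""
    return 1.0 - target_success ** (1.0 / num_gates)

# Toy assumption: gates per logical operation grow linearly with qubit count.
GATES_PER_QUBIT = 100  # purely illustrative

for qubits in (10, 100, 1_000, 10_000):
    gates = qubits * GATES_PER_QUBIT
    eps = required_gate_error(gates)
    print(f"{qubits:>6} qubits -> {gates:>9,} gates -> per-gate error below {eps:.1e}")

# The tolerable per-gate error falls roughly as 1/num_gates, i.e. the
# required fidelity rises steeply with chip size if the topology (and
# therefore the gate count per operation) scales up with it.
```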
In any case, note that in the extrapolation we basically assumed that the tradeoff didn’t exist, so I expect the predictions to be pessimistic.
A larger question is whether these kinds of historical extrapolations work at all. In fact, a large part of why I wrote this paper is precisely that I want to test this.
On this I am cautiously optimistic for hard-to-verbalize reasons.
I think the most legible reason is that technological discontinuities are somewhat rare in practice.
The hard-to-verbalize reasons are… points at the whole Eliezer vs OpenPhil and Christiano debate.
Did you mean bearish?
I keep making this mistake facepalm