That isn’t directly related to any of the claims I made, which specifically concerned the thermodynamic efficiency of cellular computations, the eye, and the brain.
Hence why it’s an answer to a question called “Does biology reliably find the global maximum, or at least get close?” :P
By analogy, I think it is in fact correct for brains as well. Brains don’t use quantum computing or reversible computing, so they’re very far from the global optimum use of matter for computation. Those are also hard, if not impossible, to realistically harness with something made out of living cells.
Neither of the alternatives has been proven to work at scale, though? In fact, there are still theoretical hurdles for a human-brain-scale implementation in either case that have not been fully addressed in the literature.
Go on, what are some of the theoretical hurdles for a brain-scale quantum computer?
Interconnections between an enormous number of qubits?
If you’re talking about decoherence issues, those are solvable with error-correcting codes, and we now have a proof that quantum error-correcting codes can, in principle, solve the decoherence problem completely.
Link to article here:
https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/
Link to study:
https://arxiv.org/abs/2111.03654
I’m referring to the real-world engineering problem that interconnection requirements scale exponentially with the number of qubits. There simply isn’t enough volume to make it work beyond some upper limit on the number of qubits, since they also have to be quite close to each other.
It hasn’t been proven at all what this upper limit is, or that it allows for capabilities matching or exceeding those of the average human brain.
If the size is scaled down to reduce the distances, another problem arises: there’s a maximum limit on the amount of power that can be supplied to any unit volume, especially when cryogenic cooling is required, since cooling and refrigeration systems cannot be perfectly efficient.
Something with 1/100th the efficiency of the human brain and the same size might work, i.e. 2 kW instead of 20 W.
But something with 1/1,000,000th the efficiency of the human brain and the same size would never work, since it’s impossible to supply 20 MW of power to such a concentrated volume while carrying away the excess heat fast enough. That is a hard thermodynamic limit.
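For a rough sense of those numbers, here’s a back-of-the-envelope sketch in Python. The ~1.2 L brain volume and the ~2 kW-per-litre cooling ceiling are my own illustrative assumptions, not figures from this discussion or any measured limit:

```python
# Back-of-the-envelope: power density implied by a brain-sized device
# at various efficiencies relative to the human brain (~20 W, ~1.2 L).
# The cooling ceiling is an illustrative assumption, not a measured limit.

BRAIN_POWER_W = 20.0                 # approximate power draw of a human brain
BRAIN_VOLUME_L = 1.2                 # approximate human brain volume in litres
COOLING_CEILING_W_PER_L = 2_000.0    # assumed max removable heat per litre

for efficiency_factor in (1, 100, 1_000_000):
    power_w = BRAIN_POWER_W * efficiency_factor
    density_w_per_l = power_w / BRAIN_VOLUME_L
    verdict = "maybe" if density_w_per_l <= COOLING_CEILING_W_PER_L else "no"
    print(f"1/{efficiency_factor:,} of brain efficiency: "
          f"{power_w:,.0f} W total, {density_w_per_l:,.0f} W/L -> feasible? {verdict}")
```

With those assumptions the 2 kW case lands near the ceiling, while the 20 MW case overshoots it by roughly four orders of magnitude, which is the gap the argument above is pointing at.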
There is the possibility of spreading the qubits quite a bit farther apart, e.g. across a room-sized space, but that goes back to the first issue, as it brings exponentially increasing losses from things like signalling issues. These may be partially mitigated by improvements such as error-correcting codes, but there cannot exist a ‘complete’ solution, as perfectly lossless information transmission is only an ideal and not achievable in practice.
One of the bigger problems that was solved recently is error correction. Without actively cooling things down, quantum computers need error correction, and it used to be a real issue.
However, this was solved a year ago, at least in theory.
It also solves the decoherence problem, which in theory allows room-temperature quantum computers. It’s at least a possibility proof.
The article’s link is here:
https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/
And the actual paper is here:
https://arxiv.org/abs/2111.03654
Other than that, the problems are all practical.
Oh, cool! I’m not totally clear on what this means—did things like the toric code provide error correction in a linear number of extra steps, while this new result paves the way for error correction in a logarithmic number of extra steps?
Basically, the following properties hold for this code (I’m trusting Quanta Magazine to report the study correctly):
It is efficient like classical codes.
It can correct many more errors than previous codes.
It has a constant ability to suppress errors, no matter how long the sequence of bits you start with.
Each of its checks sums up only a very low number of bits/qubits, which the Quanta article calls the LDPC property (a toy sketch of this sparse-check idea follows the list).
It has local testability, that is, errors can’t hide themselves, and any check can reveal a large proportion of errors, evading Goodhart’s Law.
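To make the sparse-check idea concrete, here is a minimal classical toy in Python: each parity check looks at only a few bits, yet a single flipped bit still trips at least one check. This is just my own illustration of that property with an arbitrary matrix H, not the quantum LDPC construction from the paper:

```python
# Toy *classical* sketch of the LDPC ("sparse checks") property described above.
# Each check (row of H) sums only 3 of the 6 bits, yet a corrupted bit
# still produces a nonzero syndrome, so errors cannot hide.
import numpy as np

H = np.array([            # sparse parity-check matrix: 3 checks over 6 bits
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 0, 1],
], dtype=int)

codeword = np.zeros(6, dtype=int)     # the all-zeros word satisfies every check
assert not (H @ codeword % 2).any()

corrupted = codeword.copy()
corrupted[3] ^= 1                     # a single bit-flip error
syndrome = H @ corrupted % 2          # re-run the checks
print("syndrome:", syndrome)          # prints [1 0 1]: two checks flag the error
```

Roughly speaking, the quantum version has to do this with two interlocking families of checks (for bit-flip and phase errors) while keeping every check small, which is part of what makes the construction difficult and the result notable.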
Yeah, that’s the big one for brains. I might answer using a similar example soon, though that could be a big one itself, as provisionally the latter has 35 more orders of magnitude worth of computing power.