The most relevant Pareto-optimality frontiers are computational: biological cells being computationally near optimal in both storage density and thermodynamic efficiency seriously constrains or outright dashes the hopes of nanotech improving much on biotech. This also indirectly relates to brain efficiency.
Not really, not without further assumptions. The two largest assumptions are:
1. We are strictly limited to classical computing for the future, and there are no superconducting materials to help us. Now, I have a fairly low probability for superconducting/reversible/quantum computers this century, on the order of 2-3%. But that is for this century; conditional on no x-risk, and allowing 1,000-10,000 years, I put a 1-epsilon probability on quantum computers and superconductors being developed, and more like 10-20% on reversible computing.
2. We can't use more energy. Charlie Steiner gives an extreme case, but if we can increase the energy, we can get much better results.
And note that this is disjunctive; that is, if either assumption is wrong, your case collapses.
Neither 1 nor 2 is related to the thermodynamic efficiency of biological cells or hypothetical future nanotech machines. It is very unlikely that exotic superconducting/reversible/quantum computing is possible for cell-sized computers in a room-temperature heat bath environment. There is too much entropy to deal with.
My point is that your implications only hold if the other assumptions hold too, not just the efficiency assumption.
Also, error correction codes exist for quantum computers, which deal with the room-temperature decoherence issue you're talking about, which is why I'm so confident about quantum computers working. We also understand how superconductors work; link to the article here:
https://www.quantamagazine.org/high-temperature-superconductivity-understood-at-last-20220921/
And a link to the article on quantum error correction here:
https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/
And the actual study:
https://arxiv.org/abs/2111.03654
How does reversible/quantum computing help with protein transcription or DNA replication? Neither of those exotic computing techniques reduces the fundamental thermodynamic cost of physical bit erasures/copies, from what I understand.
Because we get to use the Margolus-Levitin limit, which states:
A quantum system of energy E needs at least a time of h/(4E) to go from one state to an orthogonal state, where h is the Planck constant (6.626×10^−34 J⋅Hz^−1) and E is the average energy.
This means we get a 15-order-of-magnitude decrease from your estimate of 1e-19 joules for one bit, which is much better news for nanotech.
I have even better news for total computation limits: 5.4×10^50 operations per second for a kilogram of matter.
The limit for speed is this:
The Margolus-Levitin theorem, named for Norman Margolus and Lev B. Levitin, gives a fundamental limit on quantum computation (strictly speaking, on all forms of computation). The processing rate cannot be higher than 6×10^33 operations per second per joule of energy.
And since you claimed that computational limits matter for biology, the relevance should be obvious.
A link to the Margolus-Levitin theorem:
https://en.m.wikipedia.org/wiki/Margolus–Levitin_theorem
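For anyone who wants to check those figures, here is a minimal sketch, assuming only the standard 4E/h form of the Margolus-Levitin bound and E = mc^2 for a kilogram of matter; treat it as a back-of-the-envelope check rather than anything rigorous:

```python
# Back-of-the-envelope check of the Margolus-Levitin figures quoted above.
# Assumes the standard form of the bound: rate <= 4E / h operations per second.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s

ops_per_joule = 4 / h                 # maximum operations per second per joule
ops_per_kg = ops_per_joule * c**2     # using E = m*c^2 for 1 kg of matter

print(f"ops/s per joule: {ops_per_joule:.2e}")   # ~6.0e33, matching the 6x10^33 figure
print(f"ops/s per kg:    {ops_per_kg:.2e}")      # ~5.4e50, matching the 5.4x10^50 figure
```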
In the fully reversible case, the answer is that zero energy is expended.
That doesn’t help with bit erasures and is thus irrelevant to what I’m discussing—the physical computations cells must perform.
The nice thing about quantum computers is that they're mostly reversible, i.e. swaps can always be done with zero energy, until you make a measurement. Once you do, you have to pay the energy cost, which I showed in the last comment. We don't need anything else here.
Thanks to porby for mentioning this.
The nice thing about quantum computers is that they're mostly reversible, i.e. bit erasures can always be done with zero energy…
You seem confused here: reversible computations do not, and cannot, erase/copy bits; all they can do is swap/transfer bits, moving them around within the computational system. Bit erasure is actual transference of the bit's entropy into the external environment, outside the bounds of the computational system (which also breaks internal quantum coherence, from what I recall, but that's a side point).
Replication/assembly requires copying bits into (and thus erasing bits from) the external environment. This is fundamentally an irreversible computation.
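For concreteness, here is a minimal sketch of the irreducible cost being argued about, assuming only the standard Landauer bound of kT·ln(2) per erased bit and an assumed body-temperature heat bath of about 310 K; the 1e-19 J/bit figure is the estimate quoted earlier in the thread:

```python
import math

# Landauer bound: erasing (or irreversibly copying) one bit into a heat bath
# at temperature T dissipates at least k*T*ln(2) of energy, regardless of how
# the computation is implemented.
k = 1.381e-23          # Boltzmann constant, J/K
T = 310.0              # assumed body-temperature heat bath, K

landauer_per_bit = k * T * math.log(2)
print(f"Landauer minimum per bit erasure: {landauer_per_bit:.2e} J")   # ~3e-21 J

# Compare with the ~1e-19 J/bit estimate for biological copying quoted above.
# The thread's point: bits that must end up copied into (or erased from) the
# environment always pay at least this floor, reversible tricks or not.
print(f"Ratio of the 1e-19 J estimate to the floor: {1e-19 / landauer_per_bit:.0f}x")
```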
Could you elaborate on this? I’m pretty surprised by an estimate that low conditioned on ~normalcy/survival, but I’m no expert.
Admittedly, this is me considering the worst-case scenario, where no technology can reliably speed up getting to those technologies.
If I had to give an average case, I'd operationalize it with the following predictions:
Will a quantum computer be sold to 10,000+ customers with a qubit count of at least 1,000 by 2100? Probability: 15-25%.
Will superconductors be used in at least 1 grid in Europe, China, or the US by 2100? Probability: 10-20%.
Will reversible computers be created by a company with at least $100 million in market cap by 2100? Probability: 1-5%.
Now, I'm somewhat pessimistic about reversible computers, as they may never exist, but I think there's a fair chance of superconductors and quantum computers by 2100.
Thanks!
My understanding is that a true quantum computer would be a (mostly) reversible computer as well, by virtue of quantum circuits being reversible. Measurements aren’t (apparently) reversible, but they are deferrable. Do you mean something like… in practice, quantum computers will be narrowly reversible, but closer to classical computers as a system because they’re forced into many irreversible intermediate steps?
Not really. I'm focused on fully reversible systems here, as they theoretically allow you to reverse errors without dissipating any energy, so the energy stored in the system can keep being reused.
It's a great advance, and it's stronger than you think, since we don't need the irreversible intermediate steps anymore. I'll link to the article here:
https://www.quantamagazine.org/computer-scientists-eliminate-pesky-quantum-computations-20220119/
But I'm focused on full reversibility, i.e. even the measurement step can't be irreversible.
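To make the reversibility point concrete, here is a minimal numpy sketch (my own illustration, not from the discussion): a unitary gate such as CNOT can always be undone, while a projective measurement cannot, which is why full reversibility has to avoid or defer the measurement step.

```python
import numpy as np

# A reversible (unitary) operation: the 2-qubit CNOT gate. It is its own
# inverse, so applying it twice recovers the original state exactly.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
print(np.allclose(CNOT @ CNOT, np.eye(4)))   # True: fully undoable, no information lost

# A projective measurement of the first qubit in the |0> basis, by contrast,
# is represented by a rank-deficient projector: distinct input states get
# mapped to the same output, so the step cannot be reversed.
P0 = np.kron(np.diag([1.0, 0.0]), np.eye(2))
print(np.linalg.matrix_rank(P0))             # 2 (not 4): the map is not invertible
```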
In retrospect, I agree with you @porby that my estimate was way lower than it should have been, and I now think that the chances of reversible computing by 2100 are more like 50-90% than 2-3%.