No animals do nuclear fusion to extract energy from their food, meaning that they’re about 11 orders of magnitude off from the optimal use of matter.
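A minimal sketch of where a figure like that comes from, using a standard glucose energy density and E = mc² as the "optimal use of matter" ceiling (the numbers here are my own illustrative choices, not taken from the comment):

```python
# Back-of-the-envelope: chemical energy from metabolizing food vs. the full
# mass-energy of the same matter. (Glucose figure is a standard approximation;
# E = m c^2 is used as the ceiling for the "optimal use of matter".)
import math

C = 2.998e8                      # speed of light, m/s
MASS_ENERGY_J_PER_KG = C ** 2    # ~9.0e16 J/kg for complete conversion
GLUCOSE_J_PER_KG = 1.6e7         # ~16 MJ/kg released by metabolizing glucose

ratio = MASS_ENERGY_J_PER_KG / GLUCOSE_J_PER_KG
print(f"ratio ~ {ratio:.1e}, i.e. ~{math.log10(ratio):.0f} orders of magnitude")
# ~6e9 for pure glucose (~10 orders of magnitude); ordinary food is less
# energy-dense than pure glucose, which widens the gap further toward the
# figure quoted above.
```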
The inverted vs. everted retina thing is interesting, and it makes sense that there are space-and-mass-saving advantages to putting neurons inside the eye, especially if your retinas are a noticeable fraction of your weight (hence the focus on “small, highly-visual species”). But it seems like for humans in particular having an everted retina would likely be better: “The results from modelling nevertheless indicate clearly that the inverted retina offers a space-saving advantage that is large in small eyes and substantial even in relatively large eyes. The advantage also increases with increasingly complex retinal processing and thus increasing retinal thickness. [...] Only in large-eyed species, the scattering effect of the inverted retina may indeed pose a disadvantage and the everted retina of cephalopods may be superior, although it also has its problems.” (Kröger and Biehlmaier 2009)
But anyhow, which way around my vs. octopuses’ retinas are isn’t that big a mistake either way—certainly not an order of magnitude.
To get that big of an obvious failure you might have to go to more extreme stuff like the laryngeal nerve of the giraffe. Or maybe scurvy in humans.
Overall, [shrug]. Evolution’s really good at finding solutions but it’s really path-dependent. I expect it to be better than human engineering in plenty of ways, but there are plenty of ways the actual global optimum is way too weird to be found by evolution.
The point about nuclear fusion isn’t directly related to any of the claims I made, which specifically concerned the thermodynamic efficiency of cellular computations, the eye, and the brain.
Nuclear fusion may simply be impossible for a cell-sized machine, self-assembled out of common elements, to realistically harness.
Hence why it’s an answer to a question called “Does biology reliably find the global maximum, or at least get close?” :P
By analogy, I think it is in fact correct for brains as well. Brains don’t use quantum computing or reversible computing, so they’re very far from the global optimum use of matter for computation. Those are also hard if not impossible to realistically harness with something made out of living cells.
Neither of the alternatives has been proven to work at scale, though?
In fact there are still theoretical hurdles for a human brain-size implementation in either case that have not been fully addressed in the literature.
Go on, what are some of the theoretical hurdles for a brain-scale quantum computer?
Interconnections between an enormous number of qubits?
If you’re talking about decoherence issues, those are solvable with quantum error-correcting codes, and we now have a proof that the decoherence problem can be completely solved that way.
Link to article here:
https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/
Link to study:
https://arxiv.org/abs/2111.03654
I’m referring to the real world engineering problem that interconnection requirements scale exponentially with the number of qubits. There simply isn’t enough volume to make it work beyond an upper threshold limit of qubits, since they also have to be quite close to each other.
It has not at all been proven what this upper limit is, or that it allows for capabilities matching or exceeding those of the average human brain.
If the size is scaled down to reduce the distances, another problem arises: there is a maximum limit to the amount of power that can be supplied to any unit volume, especially when cryogenic cooling is required, since cooling and refrigeration systems cannot be perfectly efficient.
Something with 1/100th the efficiency of the human brain and the same size might work, i.e., 2 kW instead of 20 W.
But something with 1/1,000,000th the efficiency of the human brain and the same size would never work, since it’s impossible for 20 MW of power to be supplied to such a concentrated volume while cooling away the excess heat sufficiently. That is a hard thermodynamic limit.
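A minimal back-of-the-envelope sketch of that constraint (the ~20 W and ~1.3 L figures for the brain are the usual rough estimates; the efficiency factors are the ones used above):

```python
# Power draw and volumetric power density for a brain-sized computer at
# various efficiencies relative to the human brain (illustrative figures only).

BRAIN_POWER_W = 20.0        # typical estimate of human brain power consumption
BRAIN_VOLUME_M3 = 1.3e-3    # ~1.3 litres, a common rough figure

for efficiency_factor in (1, 100, 1_000_000):
    power_w = BRAIN_POWER_W * efficiency_factor
    density_w_per_m3 = power_w / BRAIN_VOLUME_M3
    print(f"1/{efficiency_factor} of brain efficiency: "
          f"{power_w / 1e3:g} kW total, {density_w_per_m3:.2e} W/m^3")

# 1/100 of brain efficiency gives 2 kW in ~1.3 L, which is hard but conceivable
# to cool; 1/1,000,000 gives 20 MW in the same volume (~1.5e10 W/m^3), far
# beyond what any practical cooling system can remove from so small a volume.
```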
There is the possibility of spreading the qubits quite a bit farther from each other, i.e., across a room-sized space, but that goes back to the first issue, as it brings exponentially increasing losses from things like signalling issues. These may be partially mitigated by improvements such as error-correcting codes, but there cannot exist a ‘complete’ solution, as perfectly lossless information transmission is only an ideal and not achievable in practice.
One of the bigger problems that was solved recently is error correction. Without actively cooling things down, quantum computers need error correction, and it used to be a real issue.
However, this was solved a year ago, at least in theory.
It also solves the decoherence problem, which in theory allows room-temperature quantum computers. It’s at least a possibility proof.
The article’s link is here:
https://www.quantamagazine.org/qubits-can-be-as-safe-as-bits-researchers-show-20220106/
And the actual paper is here:
https://arxiv.org/abs/2111.03654
Other than that, the problems are all practical.
Oh, cool! I’m not totally clear on what this means—did things like the toric code provide error correction in a linear number of extra steps, while this new result paves the way for error correction in a logarithmic number of extra steps?
Basically, the following properties hold for this code (I’m trusting Quanta Magazine to report the study correctly):
It is efficient like classical codes.
It can correct many more errors than previous codes.
It has a constant ability to suppress errors, no matter how long the sequence of bits you’ve started with.
Each check sums up only a very low number of bits/qubits, which the Quanta article calls the LDPC property (see the toy sketch after this list).
It has local testability, that is, errors can’t hide themselves: any check can reveal a large proportion of errors, evading Goodhart’s Law.
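To make the LDPC idea concrete, here is a classical toy analogy of my own (not the construction from the paper): each parity check touches only a few bits, yet a flipped bit immediately shows up in the checks that cover it.

```python
# Toy classical illustration of the low-density parity-check (LDPC) property:
# every check involves only a few bits, and a single flipped bit trips the
# checks that touch it. The actual quantum construction is far more involved.
import numpy as np

# Each row is one parity check; every row touches only 3 of the 6 bits.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
])

def syndrome(word: np.ndarray) -> np.ndarray:
    """Parity of each check; all zeros means no error detected."""
    return H @ word % 2

codeword = np.zeros(6, dtype=int)   # the all-zero word trivially passes
corrupted = codeword.copy()
corrupted[1] ^= 1                   # flip a single bit

print(syndrome(codeword))    # [0 0 0] -> passes every check
print(syndrome(corrupted))   # [1 1 0] -> the two checks covering bit 1 fire
```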
Yeah, that’s the big one for brains. I might answer using a similar example soon, but that might be a big one, as provisionally the latter has 35 more orders of magnitude worth of computing power.
One might say you’re talking about costreg foom, not Kardashev foom.
Even here, that doesn’t apply to quantum/reversible computers, or superconducting wires.
The article I linked argues that the inverted retina is near optimal, if you continue reading...
The scattering effects are easily compensated for:

“Looking out through a layer of neural tissue may seem to be a serious drawback for vertebrate vision. Yet, vertebrates include birds of prey with the most acute vision of any animal, and even in general, vertebrate visual acuity is typically limited by the physics of light, and not by retinal imperfections.

So, in general, the apparent challenges with an inverted retina seem to have been practically abolished by persistent evolutionary tweaking. In addition, opportunities that come with the inverted retina have been efficiently seized. In terms of performance, vertebrate eyes come close to perfect.”
The everted retina has an issue:

“A challenge that comes with the everted retina is to find suitable space for early neural processing. The solution seems to have been to make an absolute minimum of early processing in the retina: photoreceptor axons project straight to the optic lobes, which lie directly behind the eyes.”
The inverted retina, with internal space for extensive retinal circuitry, performs high-efficiency video compression (roughly equivalent to H.264 in compression rate), which enormously reduces the space- and energy-expensive wiring required for video output to the brain via a compact optic nerve cable. The invertebrate everted retina instead has a massive set of axon bundles connecting an optic lobe directly to the back of the eye, impeding free rotation. This advantage scales with eye/retina size.
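For a rough sense of scale, a minimal sketch using commonly cited approximate counts for the human eye (these figures are my own ballpark inputs, not from the linked article):

```python
# Rough data-reduction factor implied by retinal preprocessing, using common
# ballpark figures for the human eye (approximations, supplied for illustration).

PHOTORECEPTORS = 1.0e8        # ~100 million rods and cones
OPTIC_NERVE_FIBERS = 1.0e6    # ~1 million retinal ganglion cell axons

reduction = PHOTORECEPTORS / OPTIC_NERVE_FIBERS
print(f"~{reduction:.0f}:1 reduction in channel count before leaving the eye")
# ~100:1, consistent with substantial in-retina compression feeding a compact
# optic nerve rather than raw per-photoreceptor wiring.
```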
The benefit of the inverted retina doesn’t scale with size. It decreases with size.
Amount of retina scales like r^2, while amount of eyeball to put neurons in scales like r^3. This means that the smaller you are, the harder it is to find space to put neurons, while the bigger you are, the easier it is. This is why humans have eyeballs full of not-so-functional vitreous humor, while the compound eyes of insects are packed full of optical neurons.
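A minimal sketch of that geometric point (the two radii below are just illustrative values, roughly insect-scale and human-scale):

```python
# Retina area grows like r^2 while interior eye volume grows like r^3, so the
# area-to-volume ratio falls as 1/r: small eyes have little interior space per
# unit of retina, large eyes have plenty. (Radii are illustrative only.)
from math import pi

def area_to_volume_ratio(r_mm: float) -> float:
    """Sphere surface area divided by enclosed volume, in 1/mm (equals 3/r)."""
    return (4 * pi * r_mm**2) / ((4 / 3) * pi * r_mm**3)

for r_mm in (0.5, 12.0):   # roughly insect-scale vs. human-scale eye radius
    print(f"r = {r_mm} mm -> area/volume = {area_to_volume_ratio(r_mm):.2f} per mm")
```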
Yes, cephalopods also have eye problems. In fact, this places you in a bit of a bind—if evolution is so good at making humans near-optimal, why did evolution make octopus eyes so suboptimal?
The obvious thing to do is to just put the neurons that are currently in front of the retina in humans behind the retina instead. Or if you’re an octopus, the obvious thing to do is to put some pre-processing neurons behind the retina. But these changes are tricky to evolve as a series of small mutations (the octopus eye changes less so—maybe they have hidden advantages to their architecture). And they’re only metabolically cheap for large-eyed, large-bodied creatures—early vertebrates didn’t have all this free space that we do.
The advantage I meant is the wiring reduction from compression, which generally scales with size/resolution due to the frequency power spectrum of natural images.
Putting the neurons behind the retina is obvious perhaps, but also wrong: it has no ultimate advantage.
Evidence for near-optimality of inverted retina is not directly evidence for sub-optimality of everted retina: it could just be that either design can overcome tradeoffs around the inversion/eversion design choice.
How do you view the claim that human cells are near a critical upper limit?
Here’s what I’d agree with: Specific cell functions are near a local optimum of usefulness, in terms of small changes to DNA that could have been supported against mutation with the fraction of the selection budget that was allocated to those functions in the ancestral environment.
This formulation explains why human scurvy is allowed—producing vitamin C was unimportant in our ancestral environment, so the gene for it was allowed to degrade. And it doesn’t fault us for not using fusion to extract energy from food—there’s no small perturbation to our current digestive tract that starts a thermonuclear reaction.
It’s probably just wrong. For a trivial disproof: I will assume as stated that human neurons are at the Landauer limit.
Well, we know from measurements and other studies that nerve cells are unreliable. Failures to fire, exhaustion of their internal fuel supply so they stop pulsing when they should, all the numerous ways the brain makes system-level errors, and the slow speed of signaling mean that, as a system, the brain is nowhere close to optimal. (I can provide sources for all claims.) The Landauer limit is for error-free computations; when you inject random errors you lose information and system precision, and thus a much smaller error-free system would be equal in effectiveness to the brain.
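For reference, a minimal sketch of what the Landauer bound works out to at body temperature (standard physical constants; the ~20 W figure is the usual whole-brain power estimate):

```python
# Landauer bound at body temperature, and the maximum rate of bit erasures the
# brain's power budget could pay for if it operated exactly at that bound.
from math import log

K_B = 1.380649e-23      # Boltzmann constant, J/K
T_BODY = 310.0          # approximate body temperature, K
BRAIN_POWER_W = 20.0    # usual estimate of whole-brain power consumption

e_per_bit = K_B * T_BODY * log(2)            # minimum energy to erase one bit
max_erasures_per_s = BRAIN_POWER_W / e_per_bit

print(f"Landauer limit at 310 K: {e_per_bit:.2e} J per bit erased")
print(f"20 W buys at most {max_erasures_per_s:.2e} bit erasures per second")
# ~3e-21 J per bit and ~7e21 erasures/s: an upper bound that assumes perfectly
# reliable, maximally efficient operations, which (as argued above) real
# neurons do not achieve.
```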
This is likely why we are hitting humanlike performance in many domains with a small fraction of the estimated compute and memory of a brain.
Also, when you talk about artificial systems: the human brain has no expansion ports, no upload or download interfaces, no way to use a gigawatt of power to solve more difficult problems, etc.
So even if we could never do better for the 20 watts the brain uses, in practice that doesn’t matter.