Fair enough for pointing out those parts of Brain Efficiency, and I understand the struggle with disclaimers, writing is hard[1].
Seems like there’s a slight chance we still disagree about that remaining 2 OOM? With current technology, where we’re stuck with dissipating the entire quantity of energy we used to represent a bit when we erase it, then I agree that it has to be ~1 eV. With hypothetical future high-tech reversible computers, then those last 2 OOM aren’t a barrier and erasure costs all the way down to kT log 2 should be possible in principle. I think you and I and Frank and Landauer all agree about that last statement about hypothetical high-tech reversible computers, but if you do happen to disagree, please let me know.
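To put rough numbers on that remaining gap (a quick sanity check using only the Boltzmann constant and a ~310 K operating temperature; the ~1 eV figure is the one from the discussion above):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T   = 310.0             # roughly body/brain temperature, K
eV  = 1.602176634e-19   # J per eV

landauer_eV = k_B * T * math.log(2) / eV   # kT ln 2, the minimum erasure cost
ratio = 1.0 / landauer_eV                  # how far ~1 eV sits above that floor

print(f"kT ln 2 at {T:.0f} K = {landauer_eV:.4f} eV")                    # ~0.0185 eV
print(f"1 eV / (kT ln 2) = {ratio:.0f}x, ~{math.log10(ratio):.1f} OOM")  # ~54x, ~1.7 OOM
```

So the "2 OOM" shorthand is really a factor of ~50, i.e. a bit under two orders of magnitude.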
[1] I do still think you could have been much clearer about this when citing it in Contra Yudkowsky, even with only a few words. I also still don’t understand why your reaction to Eliezer saying the difference was 6 OOM was to conclude that he didn’t understand what he was talking about. It makes sense for you to focus on what’s possible with near-term technology, since you’re trying to forecast how compute will scale in the future, but didn’t it occur to you that maybe Eliezer’s statements did not have a similarly narrow scope?
With hypothetical future high-tech reversible computers, then those last 2 OOM aren’t a barrier and erasure costs all the way down to kT log 2 should be possible in principle. I think you and I and Frank and Landauer all agree about that last statement about hypothetical high-tech reversible computers,
Only if by “possible in principle” you mean taking near-infinite time, with a strictly useless error probability of 50%. Again, Landauer and Frank absolutely do not agree with the oversimplified Wikipedia cliff-notes hot take of kT log 2. See again section 6, “Three sources of error”, and my reply here with details.
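A minimal sketch of that point, using the standard Boltzmann-factor argument (my paraphrase of the reliability issue, not a quote of section 6): the probability that a thermal fluctuation corrupts a bit stored behind a barrier of height E scales like exp(-E/kT), so dissipating exactly kT ln 2 per erasure buys nothing better than coin-flip reliability, while ~1 eV buys astronomically low error rates:

```python
import math

k_B, T = 1.380649e-23, 300.0   # Boltzmann constant (J/K), room temperature (K)
kT = k_B * T
eV = 1.602176634e-19           # J per eV

def err_prob(barrier_in_kT):
    # Boltzmann-factor sketch: probability a thermal fluctuation
    # crosses a barrier of height E is ~exp(-E/kT)
    return math.exp(-barrier_in_kT)

print(err_prob(math.log(2)))   # barrier of exactly kT ln 2 -> 0.5, a coin flip
print(err_prob(1.0 * eV / kT)) # ~1 eV barrier (~39 kT) -> roughly 1e-17
```

This is why “possible in principle” at kT log 2 and “usable for reliable computation” are very different claims.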
I also still don’t understand why your reaction to Eliezer saying the difference was 6 OOM was to conclude that he didn’t understand what he was talking about. It makes sense for you to focus on what’s possible with near-term technology, since you’re trying to forecast how compute will scale in the future, but didn’t it occur to you that maybe Eliezer’s statements did not have a similarly narrow scope?
It did, but look very carefully at EY’s statement again:
namely that biology is simply not that efficient, and especially when it comes to huge complicated things that it has started doing relatively recently.
ATP synthase may be close to 100% thermodynamically efficient, but ATP synthase is literally over 1.5 billion years old and a core bottleneck on all biological metabolism. Brains have to pump thousands of ions in and out of each stretch of axon and dendrite, in order to restore their ability to fire another fast neural spike. The result is that the brain’s computation is something like half a million times less efficient than the thermodynamic limit for its temperature
The result of half a million times more energy than the limit comes specifically from pumping thousands of ions in and out of each stretch of axon and dendrite. This is irrelevant if you assume hypothetical reversible computing—why is the length of axon/dendrite interconnect relevant if you aren’t dissipating energy for interconnect anyway?
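EY’s “half a million” factor is at least roughly reconstructible from the ion-pumping story. With round numbers I am assuming purely for illustration (the Na+/K+ pump moves ~3 Na+ per ATP, ATP hydrolysis releases roughly 20 kT, and “thousands of ions” per stretch read generously as ~1e5 per bit-event), the cost lands within a factor of a few of his figure:

```python
import math

# Illustrative round numbers (my assumptions, not measurements):
kT          = 1.380649e-23 * 310   # J, thermal scale at ~310 K (unused below; costs kept in kT units)
ions        = 1e5                  # "thousands of ions" per stretch, per bit-event
atp_per_ion = 1.0 / 3.0            # Na+/K+ pump moves ~3 Na+ per ATP hydrolyzed
atp_cost_kT = 20.0                 # ATP hydrolysis free energy, roughly 20 kT

cost_in_kT = ions * atp_per_ion * atp_cost_kT   # energy per bit-event, in kT units
ratio = cost_in_kT / math.log(2)                # versus the kT ln 2 floor
print(f"~{ratio:.0e}x the Landauer floor")      # ~1e6 with these round numbers
```

The point stands either way: all of this cost is interconnect cost, which is exactly what hypothetical reversible signaling would eliminate.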
Remember he is claiming that biology is simply not that efficient. He is not comparing it to some exotic hypothetical reversible computer which uses superconducting wires and/or advanced optical interconnects which don’t seem accessible to biology. He is claiming that even within biological constraints of accessible building blocks, biology is not that efficient.
Also remember that EY believes in strong nanotech, and for nanotech in particular reversible computation is irrelevant, because the bottleneck operations (copying nanobot instruction code and nanobot replication) are necessarily irreversible. So in that domain we absolutely can determine that biology is truly near optimal in practice, which I already covered elsewhere. EY would probably not be so wildly excited about nanotech if he accepted that biological cells were already operating near the hard thermodynamic limits for nanotech replicators.
Finally, the interpretation where EY thinks the 6 OOM come only from exotic future reversible computing is mostly incompatible with his worldview that brains/biology are inefficient and AGI nanotech is going to kill us soon; that’s part of my counter-argument.
So is EY wrong about brain efficiency? Or does he agree that the brain is near the efficiency limits of a conventional reversible computer and surpassing it by many OOM requires hypothetical exotic computers? Which is it?