That model of wire energy sounds so non-physical I had to look it up.
(It reminds me of the billiard ball model of electrons. If you think of electrons as billiard balls, it’s hard to figure out how metals work, because it seems like the electrons will have a hard time getting through all the atoms that are in the way—there’s too much bouncing and jostling. But if electrons are waves, they can flow right past all the atoms as long as the atoms form a crystal lattice, and suddenly it’s dissipation that becomes the unusual process that needs explanation.)
So I looked through your references but I couldn’t find any mention of this formula. Not that I would have been shocked if I did—semiconductor engineers do all sorts of weird stuff that condensed matter physicists wouldn’t. But anyhow, I’m pretty sure that there’s no way the minimum energy dissipation in wires scales the way you say it does, and I’m curious if you have some authoritative source.
We can imagine several kinds of losses: radiative losses from high-frequency activity, resistive losses from moving lots of current, and irreversible capacitive losses from charging and discharging wires. I actually am pretty sure that the first two are smaller than the irreversible capacitive loss, and there are some nice excuses to ignore them: radiative losses might affect chips a little but there’s no way the brain cares about them, and there’s no way that resistive losses are going to have a basis in information theory because superconductors exist.
So, capacitance of wires! Capacitor energy is QV/2, or CV^2/2. Let’s make a spherical cow assumption that all wires in a chip are half as capacitive as ideal coax cables, and the dielectric is the same thickness as the wires. Then the capacitance is about 1.3*10^-10 Farads/m (note: this drops as you make chips bigger, but only logarithmically). So for 1V wires, capacitive energy is about 7*10^-11 J/m per discharge (70 fJ/mm, a close match to the number you cite!).
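The arithmetic here can be sanity-checked in a few lines. The SiO2-like relative permittivity and the exact geometry ratio below are my own assumptions, not the comment’s:

```python
import math

eps0 = 8.854e-12  # vacuum permittivity, F/m

def coax_capacitance_per_m(r_inner, r_outer, eps_r):
    """Capacitance per unit length of an ideal coax: 2*pi*eps / ln(r_outer/r_inner)."""
    return 2 * math.pi * eps0 * eps_r / math.log(r_outer / r_inner)

# Spherical-cow assumptions from the comment: dielectric as thick as the wire
# (outer radius = 3x inner radius), and real wires half as capacitive as ideal
# coax. eps_r ~ 3.9 (SiO2) is my assumption, not stated in the comment.
r = 10e-9  # wire radius; it cancels out, since only the radius ratio matters
C_per_m = 0.5 * coax_capacitance_per_m(r, 3 * r, eps_r=3.9)

E_per_m = 0.5 * C_per_m * 1.0**2  # (1/2) C V^2 at 1 V, in J/m
fj_per_mm = E_per_m * 1e12        # convert J/m to fJ/mm
print(C_per_m)    # ~1.0e-10 F/m, same ballpark as the 1.3e-10 F/m above
print(fj_per_mm)  # ~49 fJ/mm, vs the ~70 fJ/mm quoted
```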
But look at the scaling—it’s V^2! Not controlled by Landauer.
Anyhow I felt like there were several things like this in the post, but this is the one I decided to do a relatively deep dive on.
FWIW I am also a physicist and the interconnect energy discussion also seemed wrong to me, but I hadn’t bothered to look into it enough to comment.
I attended a small conference on energy-efficient electronics a decade ago. My memory is a bit hazy, but I believe Eli Yablonovitch (who I tend to trust on these kinds of questions) kicked off with an overview of interconnect losses (and paths forward), and for normal metal wire losses he wrote down the (1/2)CV^2 formula derived from charging and discharging the (unintentional / stray) capacitors between the wires and other bits of metal in their vicinity. Then he talked about various solutions like various kinds of low-V switches (negative-capacitance voltage-amplifying transistors, NEMS mechanical switches, quantum tunneling transistors, etc.), and LED+waveguide optical interconnects (e.g. this paper).
It seems from the replies to the parent comment that the (1/2)CV^2 formula is close to the OP formula. Score one for dimensional analysis, I guess, or else the OP formula has a justification that I’m not following.
I’m fairly confident now the Landauer/Tile model is correct (based in part on how closely it predicts the spherical-capacitance-based wire energy in this comment).
It is fundamental because every time the carrier particles transmit information to the next wire segment, they also inadvertently and unavoidably exchange some information with the outside environment, thus leaking some energy (waste heat) and/or introducing some noise. The easiest way to avoid this is to increase the distance carrier particles carry a bit before interacting—as in optical communication, where photons can travel fairly large distances before interacting with anything (in free space that distance can be almost arbitrarily large, whereas in a fiber-optic cable it is only a number of OOM larger than the electron wavelength). But that is basically impossible for dense on-chip interconnect. So the only other option there is fully reversible circuits+interconnects.
So I predict none of those solutions you mention will escape the Landauer bound for dense on-chip interconnect, unless they somehow involve reversible circuits. Low voltage doesn’t change anything (the brain uses near minimal voltages close to the Landauer limit but still is bound by the Landauer wire energy), NEMS mechanical switches can’t possibly escape the bound, and optical communication has a more generous bound but is too large as mentioned.
So the cool thing is the Landauer/Lego model is very general and very simple. I wanted a model that made reasonably accurate predictions but was extremely simple, and I believe I succeeded. More complex electrical and wire-geometry equations do not in fact make more accurate predictions for my target variables of interest, and are vastly more complex. The number of successful predictions this model makes more or less proves its correctness, in a Bayesian sense.
Yep! And it’s even more accurate if you use the correct de Broglie electron wavelength at 1V, which is 1.23 nm instead of 1 nm, which then gives 81 fJ/mm. I bet there are a few other adjustments like that, but it’s already pretty close.
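The 1.23 nm figure is just the de Broglie wavelength of an electron accelerated through 1 V, which is easy to check:

```python
import math

h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
q = 1.602176634e-19     # elementary charge, C

E_kin = 1.0 * q  # kinetic energy after falling through 1 V, in joules
lam = h / math.sqrt(2 * m_e * E_kin)  # non-relativistic de Broglie wavelength
print(round(lam * 1e9, 2))  # 1.23 (nm)
```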
Not really: as you point out, E = (1/2)QV = (1/2)CV^2, and Q = CV. But notice there is a minimum value of the charge, Q = 1 electron charge, and a minimum-value constraint on the energy per wire segment, E > Emin, so V is constrained as well—it cannot scale arbitrarily. You can of course use more electrons to represent the signal (a larger wire) and lower the voltage at the same Emin per segment, but there is a room-temperature background Landauer noise voltage of around 17 mV, and you need a non-trivial multiple of that.
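The ~17 mV figure is the room-temperature Landauer energy kT·ln2 expressed as a voltage (divided by the electron charge):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature in K (assumed)

E_landauer = k_B * T * math.log(2)  # minimum erasure energy, ~2.9e-21 J
V_landauer = E_landauer / q         # the same energy as a voltage
print(round(V_landauer * 1e3, 1))   # ~17.9 (mV)
```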
The macro wire formulas are just approximations, btw; for minimal nano-scale systems we are talking about single or few electrons (I believe the spherical cow model breaks down for nano-scale wires).
The minimal element model comes from Cavin/Zhirnov et al. (starting page 1725; the 5th or 6th reference to ‘interconnect’, the tile model); I reference it a few times in the article. They explicitly use it for calculating minimal transistor switch energies that include minimal wires, estimating wire distances, etc., and use it to forecast the end of Moore’s Law.
Communication at the nanoscale is still just a form of computation (a 1:1 function, but still erases unwanted wire states), and if it’s irreversible the Landauer Limit very much applies.
Ah, good point. But I’m still not sure the model works, because we can distribute the charge (or more generally the degrees of freedom) over the length of the wire.
Like, if the wire is only 10 nm long, adding one electron causes a way bigger voltage jump than if the wire is 500 nm long. We don’t have to add one electron per segment of wire.
I think you are correct in that you don’t actually have to have 1 electron per electron-radius (~nm) of wire—you could have a segment that is longer, but I think if you work that out it requires larger voltages to actually function correctly in terms of reliable transmission. This is all assuming we are using electron waves to transmit information, rather than ballistic electrons (but the Landauer limit will still bound the latter, just in a different way).
If you look at the spherical cow (concentric cylinder wire model), for smallish wires it reduces effectively to a constant that relates distance to capacitance, with units farads/meter.
The Landauer/Tile model predicts in advance that a natural value of this parameter will be 1 electron charge per 1 volt per 1 electron radius, i.e. 1.602*10^-19 F / 1.23 nm, or 1.3026*10^-10 F/m.
The probability that the Landauer/Tile model predicts the same capacitance per unit distance while not also somehow representing the same fundamental truth of nature is essentially epsilon. Somehow the spherical-cow wire-capacitance model and the spherical-tile electron-radius Landauer/Tile model are the same.
I think this is wrong. The Landauer limit applies to bit operations, not moving information; the fact that optical signalling has no per-distance costs should be suggestive of this. (Edit: reversibility does change things, but can be approached by reducing clock speed, which in the limit gives zero effective resistance.)

My guess is that wire energy per unit length is similar because wires tend to have similar optimum conductor:insulation diameter ratios, leading to relatively consistent capacitances per unit length.
Concretely, if you have a bunch of wires packed in a cable and want to reduce wire-to-wire capacitance to reduce C·V² energy losses, putting the wires further apart does this. That is not practical because it limits wires/cm² (cross-sectional interconnect density), but the same thing can be done with more conductive materials, e.g. switching from saltwater (~0.5 S/m) to copper (~50 MS/m) for a 10^8 increase in conductivity.
Capacitance of a wire with cylindrical insulation is proportional to 1/ln(Do/Di). For a myelinated neuron with a 1:2 saltwater:sheath diameter ratio (typical), switching to copper allows a 10^4× reduction in core diameter for the same resistance per unit length. This change leads to a 14× reduction in capacitance: (1/ln(2/1))/(1/ln(20000/1)) = ln(20000)/ln(2) = 14.2. It is even more significant for wires with thinner insulation (e.g. grey matter): ln(11000)/ln(1.1) = 97.6.
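The ratios above check out numerically; here is a sketch of the arithmetic using the comment’s own figures:

```python
import math

# Conductivity ratio: ~0.5 S/m saltwater vs ~50 MS/m copper (comment's figures)
cond_ratio = 50e6 / 0.5                 # ~1e8
diam_reduction = math.sqrt(cond_ratio)  # same R/length: area scales as 1/conductivity
print(diam_reduction)  # 10000.0

# Capacitance per length of a cylindrical insulator ~ 1/ln(Do/Di);
# compare the 1:2 core:sheath ratio before vs 1:20000 after shrinking the core
white = math.log(20000) / math.log(2)   # ~14.3x reduction (the comment rounds to 14.2)
grey = math.log(11000) / math.log(1.1)  # ~97.6x reduction for thin insulation
print(round(white, 1), round(grey, 1))
```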
A lot of the capacitance in a myelinated neuron is in the unmyelinated nodes, but we can now place them further apart. Though to do this, we have to keep the resistance between nodes the same. So instead of a 10^4× reduction in wire area we take 2700×, leading to a 12.5× reduction in unit capacitance and resistance. Nodes can now be placed 12.5× apart, for a 12.5× total reduction in energy.
This is not the optimum design. If your wires are tiny hairs in a sea of insulation, consider packing everything closer together. With wires 10,000× smaller, length reductions on that scale would follow, leading to a 10,000× reduction in switching energy. At some point quantum shenanigans ruin your day, but a brain-like structure should be achievable with energy consumption 100–1000× lower.
Practically, putting copper into cells ends badly, and there is also the issue of charge-carrier compatibility. In neurons, in addition to acting as charge carriers, sodium and potassium have opposing concentration gradients which act as energy storage to power spiking. Copper uses electrons as charge carriers, so there would have to be electrodes to adsorb/desorb aqueous charge carriers and exchange them for electrons in the copper. In practice it might be easier to switch to having +ve and −ve supply-voltage connections and make the whole thing run on DC power like current computer chips do. This requires swapping out the voltage-gated ion channels for something else.
Computers have efficiencies similar to the brain despite having much more conductive wire materials mostly because they are limited to packing their transistors on a 2D surface. Add more layers (even with relatively coarse interconnectivity) and energy efficiency goes up.
Here’s a 2006 article by Intel covering the benefits of having two logic layers, by stacking two logic dies face to face. Power consumption for equivalent performance was 46%. That suggests that power consumption in modern chips is driven by overly long wires resulting from the lack of a 3rd dimension. I remember, but can’t find, papers on the use of more than 2 layers. There are issues there because layer-to-layer connectivity sucks: die-to-die interconnect density is much lower than transistor density, so efficiency gains don’t scale that well past 5 layers, IIRC.
Also discussed in the article—you are wasting time by not having read it. Both brains and current semiconductor chips are built on dissipative/irreversible wire signaling, and are mostly interconnect by volume.
That’s exactly what I meant. Thin wires inside a large amount of insulation are suboptimal.
When using better wire materials, rather than reducing capacitance per unit length, interconnect density can be increased (more wires per unit area) and then the entire design compacted. That gives higher capacitance per unit wire length than the alternative, but much shorter wires, leading to overall lower switching energy.
This is why chips and brains are “mostly interconnect by volume”: building them any other way is counterproductive.
The scenario I outlined, while suboptimal, shows that in white matter there’s an OOM to be gained even in the case where wire length cannot be decreased (e.g. trying to further fold the grey matter locally in the already very folded cortical surface). In cases where white-matter interconnect density was limiting and further compaction is possible, you could cut wire length for more energy/power savings, and that is the better design choice.
It sure looks like that could be possible in the above image. There’s a lot of white matter in the middle and another level of even coarser folding could be used to take advantage of interconnect density increases.
Really though increasing both white and grey matter density until you run up against hard limits on shrinking the logic elements (synapses) would be best.
Brain interconnect already approaches the Landauer limit for irreversible signalling, so changing out materials makes no difference unless you can also shrink the volume to reduce lengths. But as discussed in the section on density & temperature, the brain is also density-bound, based on the limits of heat transfer to the surface of the skin as a radiator.
Optical signalling is reversible—as discussed in the article, if you had only read it.
Agreed. My bad.
The discussion is about nanowires for interconnect. The Landauer model correctly predicted—in advance—a nanowire capacitance of 1 electron charge per 1 volt per 1 electron radius, i.e. 1.602*10^-19 F / 1.23 nm, or 1.3026*10^-10 F/m. This is near exactly the same as the spherical-cow wire-capacitance model:
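Spelled out, the coincidence being claimed is just:

```python
q = 1.602176634e-19  # elementary charge, C
lam = 1.23e-9        # de Broglie wavelength of a 1 eV electron, m
V = 1.0              # volts

# one electron charge per volt per electron wavelength of wire
C_tile = q / (V * lam)
print(C_tile)  # ~1.30e-10 F/m, vs the ~1.3e-10 F/m from the coax spherical cow
```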
“and the dielectric is the same thickness as the wires” is doing the work there. It makes sense to do that if you’re packing everything tightly, but with an 8 OOM increase in conductivity we can choose to change the ratio (by quite a lot) in the existing brain design. In a clean-slate design you would obviously do some combination of wire thinning and increasing overall density to reduce wire length.
The figures above show that (ignoring integration problems like copper toxicity and Na/K vs e⁻ charge-carrier differences), assuming you do a straight saltwater-to-copper swap in white-matter neurons and just change the core diameter (replacing most of it with insulation), energy per switch event goes down by 12.5×.
I’m pretty sure that for non-superconductive electrical interconnects the reliability is set by Johnson–Nyquist noise, and figuring out the output noise distribution for an RC transmission line is something I don’t feel like doing right now. Worth noting: the above scenario preserves the R:C ratio of the transmission line (i.e. 1 ohm worth of line has the same distributed capacitance), so thermal noise as seen from the end should be unchanged.
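For reference, the Johnson–Nyquist noise voltage across a lumped resistance is sqrt(4·k_B·T·R·B); the 1 kΩ and 1 GHz numbers below are purely illustrative, not taken from the thread:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # temperature, K

def johnson_vrms(R_ohms, bandwidth_hz):
    """RMS thermal noise voltage across a resistor: sqrt(4 k_B T R B)."""
    return math.sqrt(4 * k_B * T * R_ohms * bandwidth_hz)

v = johnson_vrms(1e3, 1e9)  # 1 kOhm seen over 1 GHz of bandwidth
print(round(v * 1e6, 1))    # ~128.7 microvolts rms
```

Note the square-root dependence on bandwidth, which is what makes the slow-response averaging argument below work.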
The brain is already close to the Landauer limit for irreversible interconnect in terms of energy per bit per nm; swapping out materials is irrelevant.
Consider trying to do the reverse for computers: swap copper for saltwater.

You can of course drop operating frequency by 10^8, for a 10–50 Hz clock speed, at the same energy efficiency.

But you could get added energy efficiency in any design by scaling down the wires to increase resistance / reduce capacitance, and reducing clock speed.

In the limit, adiabatic computing is reversible, because moving charge carriers arbitrarily slowly takes the resistive dissipation to zero.

Thermal noise voltage scales with the square root of bandwidth. Put another way, if the logic element responds slowly enough, it sees lower noise by averaging.
Consider a nanoelectromechanical relay. These are usually used for RF switching, so switching voltage isn’t important, but switching voltage can be brought arbitrarily low. The mass of the cantilever determines frequency response. A NEM relay with a very long, light, low-stiffness cantilever could respond well at 20 kHz but be sensitive to thermal noise. Adding mass to the end makes it less sensitive to transients (lower bandwidth, slower response) without affecting switching voltage.
In a NEMS computer there’s the option of dropping stiffness, voltage, and operating frequency while increasing inertia (all proportionally), which allows for quadratic reductions in power consumption.

I.e., moving closer to the ideal of zero effective resistance by taking clock speed to zero.
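A toy reading of that scaling (my interpretation of the trade, not a NEMS device model): switching energy per operation goes as (1/2)·C·V², so cutting V by some factor k cuts energy per operation by k², at the cost of also cutting the operating frequency by k:

```python
def energy_per_op(C, V):
    # switching energy of charging a capacitance C to voltage V
    return 0.5 * C * V**2

C = 1e-15  # 1 fF, an illustrative device capacitance (assumed)
base = energy_per_op(C, 1.0)    # at 1 V
scaled = energy_per_op(C, 0.1)  # voltage (and clock speed) scaled down 10x
print(round(base / scaled))     # 100x lower energy per operation
```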
The bit erasure Landauer limit still applies but we’re ~10^6 short of that right now.
Caveats:
NEM relays currently have limits to voltage scaling due to adhesion. Assume the hypothetical relay has a small enough contact point that thermal noise can unstick it; operating frequency may have to be a bit lower to wait for this to happen.