You obviously didn’t read the post, as it does discuss this; see the section on size and temperature.
That point (compute energy/system surface area) assumes we can’t drop clock speed. If cooling were the binding constraint, we could drop clock speed and reap efficiency gains from miniaturization.
Heat dissipation scales linearly with size for a constant ΔT. Shrink a device by a factor of ten and the driving thermal gradient steepens by 10x while the cross-sectional area of the material conducting that heat goes down by 100x. So if thermals are the constraint, then scaling linear dimensions down by 10x requires reducing power by 10x or switching to some exotic cooling solution (which may be limited in the improvement OOMs achievable).
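A quick sanity check of that scaling with a toy Fourier-conduction model (the conductivity, ΔT, and dimensions below are illustrative assumptions, not numbers from the post):

```python
# Fourier conduction through a slab: Q = k * A * dT / L.
def conducted_heat(k, area, d_temp, thickness):
    """Steady-state conducted heat (W) through a slab."""
    return k * area * d_temp / thickness

s = 0.1  # shrink every linear dimension 10x
q_full = conducted_heat(k=0.6, area=1e-2, d_temp=5.0, thickness=1e-3)
q_small = conducted_heat(k=0.6, area=1e-2 * s**2, d_temp=5.0, thickness=1e-3 * s)
# Area fell 100x but the gradient steepened 10x: net heat removal fell 10x.
assert abs(q_small / q_full - s) < 1e-12
```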
But if we assume constant energy per bit per unit of wire length, reducing wire length by 10x cuts power consumption by 10x. Only if you want to increase clock speed by 10x (possible since propagation velocity is unchanged and signals travel less distance) does power go back up. In fact, wire thinning to reduce propagation speed gets you a small amount of added power savings.
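The interconnect bookkeeping above, sketched numerically (the energy-per-bit-meter constant and wire lengths are assumptions for illustration only):

```python
# Interconnect power = (energy per bit per meter) * wire length * bit rate.
E_PER_BIT_METER = 1e-12  # J per bit-meter -- an assumed constant, for illustration

def wire_power(length_m, bits_per_s):
    return E_PER_BIT_METER * length_m * bits_per_s

p_base = wire_power(1e-3, 1e9)    # baseline wire length at baseline clock
p_short = wire_power(1e-4, 1e9)   # 10x shorter wires, same clock: 10x less power
p_fast = wire_power(1e-4, 1e10)   # 10x shorter wires at 10x clock: power is back
assert abs(p_short / p_base - 0.1) < 1e-12
assert abs(p_fast / p_base - 1.0) < 1e-12
```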
All that assumes the logic will shrink, which is not a given.
Added points regarding cooling improvements:
- Brain power density of ~20 mW/cc is quite low.
- ΔT is pretty small (single-digit °C).
- Switching to temperature-tolerant materials for higher ΔT gives 1-1.5 OOM.
- Phase-change cooling gives another ~1 OOM.
- Increasing pump power/coolant volume is the biggie, since even a few MPa is doable without being counterproductive or increasing the power budget much (2-3 OOM).
- Even if cooling is hard-binding, if interconnect density increases, one can downsize a bit less and devote more volume to cooling.
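Tallying the headroom claimed above as a back-of-envelope sum (taking midpoints of the quoted ranges; these are the comment's OOM guesses, not measured values):

```python
# Combine the claimed cooling-improvement headrooms (orders of magnitude).
headroom_oom = {
    "temperature-tolerant materials": 1.25,      # midpoint of 1-1.5 OOM
    "phase-change cooling": 1.0,
    "higher pump power / coolant volume": 2.5,   # midpoint of 2-3 OOM
}
total_oom = sum(headroom_oom.values())
print(f"combined cooling headroom: ~{total_oom:.2f} OOM (~{10 ** total_oom:,.0f}x)")
```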
The brain is already at minimal viable clock rate.
Your comment now seems largely in agreement: reducing wire length 10x cuts interconnect power consumption by 10x, but surface area decreases 100x, so surface power density increases 10x. That would result in a 3x increase in temperature/cooling demands, which is completely unviable for a bio brain constrained to room temperature, already using active liquid cooling and the entire surface of the skin as a radiator.
Digital computers, of course, can and do go much denser/hotter, but that ultimately costs more energy for cooling.
So anyway the conclusion of that section was:
Conclusion: The brain is perhaps 1 to 2 OOM larger than the physical limits for a computer of equivalent power, but is constrained to its somewhat larger than minimal size due in part to thermodynamic cooling considerations.
What sets the minimal clock rate? Increasing wire resistance and reducing the number of ion channels and pumps proportionally should just work (ignoring leakage).
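A toy scaling model of that suggestion (a bare RC caricature, assuming the signaling time constant tracks resistance and pump power tracks channel/pump count; none of these numbers come from the post):

```python
# Slow the "clock" k-fold: raise wire resistance k-fold and cut
# the number of ion channels/pumps k-fold.
k = 10
tau_scale = k             # RC time constant grows with resistance: k-fold slower
pump_power_scale = 1 / k  # pump power proportional to channel/pump count
# Energy per (slower) cycle is unchanged; average power drops k-fold.
assert tau_scale * pump_power_scale == 1
assert pump_power_scale == 0.1
```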
It is certainly tempting to run at higher clock speeds (serial thinking speed is a nice feature), but if miniaturization can be done and clock speeds must then be limited for thermal reasons, why can’t we just do that?
That aside, is miniaturization out of the question (i.e., the logic won’t shrink)? Is there a lower limit on the number of charge carriers needed for synapses to work?
Synapses are around 1 µm³, which seems big enough to shrink down a bit without weird quantum effects ruining everything. Humans have certainly made smaller transistors, or memristors for that matter. Perhaps some of the learning functionality needs to be stripped, but we do inference on models all the time without any continuous learning, and that’s still quite useful.
Evolutionary arms races: i.e., the need to think quickly to avoid becoming prey, to think fast enough to catch prey, etc.
The prime overall size constraint may be surface/volume ratios and temperature, as we already discussed, but yes, synapses are already pretty minimal for what they do (they are analog multipliers and storage devices).
Synapses are equivalent to entire multipliers + storage devices + some extra functions, far more than transistors.
Signal propagation is faster in larger axons.
You might find this post interesting.