I think you may be misunderstanding why I used the blackbody temp—I (and the refs I linked) use that as a starting point to indicate the temp the computing element would reach without convective cooling (i.e. in vacuum or outer space).
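For concreteness, here is a minimal sketch of the radiative-equilibrium calculation I have in mind (just the Stefan-Boltzmann law; the power and surface area below are placeholder numbers for illustration, not figures from the post or the refs):

```python
# Radiative-equilibrium ("blackbody") temperature of a computing element that can
# shed heat only by radiation, e.g. in vacuum. Numbers are illustrative only.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiative_equilibrium_temp(power_w, area_m2, emissivity=1.0):
    """Temperature at which radiated power balances dissipated power."""
    return (power_w / (emissivity * SIGMA * area_m2)) ** 0.25

# e.g. a hypothetical 300 W device radiating from 0.01 m^2 of surface area
print(radiative_equilibrium_temp(300, 0.01))  # roughly 850 K
```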
There’s a pattern here which seems to me to be coming up repeatedly (though this is the most legible example I’ve seen so far). There’s a key qualifier which you did not actually include in your post, and which would make the claims true. But once that qualifier is added, it’s much more obvious that the arguments are utterly insufficient to back up big-sounding claims like:
Thus even some hypothetical superintelligence, running on non-exotic hardware, will not be able to think much faster than an artificial brain running on equivalent hardware at the same clock rate.
Like, sure, our hypothetical superintelligence can’t build highly efficient compute which runs in space without any external cooling machinery. So, our hypothetical superintelligence will presumably build its compute with external cooling machinery, and then this vacuum limit just doesn’t matter.
You could add all those qualifiers to the strong claims about superintelligence, but then they will just not be very strong claims. (Also, as an aside, I think the wording of the quoted section is not the claim you intended to make, even ignoring qualifiers? The quote is from the speed section, but “equivalent hardware at the same clock rate” basically rules out any hardware speed difference by construction. I’m responding here to the claim which I think you intended to argue for in the speed section.)
Obviously you aren’t gaining thermodynamic efficiency by doing so—you pay extra energy to transport the heat.
Note that you also potentially save energy by running at a lower temperature, since the Landauer limit scales down with temperature. I think it comes out to roughly a wash: operate at 10x lower temperature, and power consumption can drop by 10x (at Landauer limit), but you have to pay 9x the (now reduced) power consumption in work to pump that heat back up to the original temperature. So, running at lower temperature ends up energy-neutral if we’re near thermodynamic limits for everything.
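Here’s a quick back-of-the-envelope check of that “roughly a wash” claim, under the idealized assumptions of Landauer-limited dissipation and a Carnot refrigerator (real systems are far from both limits; the code is only illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def total_energy_per_bit(t_cold, t_hot):
    """Energy per bit erasure when computing at t_cold and rejecting heat at t_hot."""
    q_cold = K_B * t_cold * math.log(2)   # Landauer bound at the (cold) compute temperature
    cop = t_cold / (t_hot - t_cold)       # Carnot refrigerator coefficient of performance
    pump_work = q_cold / cop              # work needed to pump that heat up to t_hot
    return q_cold + pump_work             # total energy drawn per erased bit

print(K_B * 300 * math.log(2))        # Landauer bound at 300 K, no refrigeration: ~2.9e-21 J
print(total_energy_per_bit(30, 300))  # compute at 30 K, reject heat at 300 K: same ~2.9e-21 J
```

At 10x lower temperature the per-bit dissipation drops 10x, but the Carnot work to lift that heat back up to 300 K is 9x the reduced dissipation, so the totals come out equal.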
The ‘big-sounding’ claim you quoted makes more sense with the preceding context, which you omitted:
Conclusion: The brain is a million times slower than digital computers, but its slow speed is probably efficient for its given energy budget, as it allows for a full utilization of an enormous memory capacity and memory bandwidth. As a consequence of being very slow, brains are enormously circuit cycle efficient. Thus even some hypothetical superintelligence, running on non-exotic hardware, will not be able to think much faster than an artificial brain running on equivalent hardware at the same clock rate.
Because of its slow speed, the brain is super-optimized for intelligence per clock cycle. So digital superintelligences can think much faster, but to the extent they do so they are constrained to be brain-like in design (ultra-optimized for low circuit depth). I have a decade-old post analyzing/predicting this here, and today we have things like GPT4 which imitate the brain but run 1000x to 10000x faster during training, and thus excel at writing.