Fwiw, I spot-checked this post at the time, although I did not share it then (bad priors). Here it goes:
Yes, it’s probably approximately right, but you need to buy that its assumptions are the right ones. However, these assumptions also make the question somewhat unimportant for EA purposes: even if the brain is among the most efficient designs for its specs, including a few pounds and a few watts, you could still believe doom or foom could happen with the same specs scaled up to a few tons and megawatts, or under some other specs entirely (quantum computers, somewhat soon, or something else, somewhat maybe).
Edit: I just noticed this somewhat duplicates Vaniver’s answer above, so let me add something more: why I think this is not the most interesting set of assumptions.
First, to me the brain is primarily optimized for the robustness of its construction plan, which has to work across a large set of species that all inherit basically the same plan, and across multiple (allelic) variants of these plans (with sexual competition, etc.). Yes, this is probably compatible with optimizing energy in the long run, but not enough to invent rolling balls, if you see what I mean.
Second, and perhaps more important, it assumes that the brain is doing a hard computation. On that, we really have no idea. Like most cognitive neuroscientists from the nineties, I once started a presentation with the widely accepted trope that our brain is the most complicated thing, bla bla. And yes, there are reasons to think that. On the other hand, if ResNet-50 can predict most of the variance in neural hemodynamics while viewing the same picture, then maybe current GPUs are not that far from the effective computational power of billions of neurons. This shouldn’t be that surprising: after all, biological neurons are not optimized for handling 64-bit precision and gigahertz clocks.
Follow-up: excellent new material from SB, who provides a concrete research avenue for showing that physics allows more than JC’s assumptions do. However, the most interesting part might be Jacob providing his best (imho) point for why we can’t reject his assumptions so easily:
So I observe the fact that human engineering and biology have ended up on the same pareto surface for interconnect space & energy efficiency—despite being mostly unrelated optimization processes using very different materials—as evidence of a hard pareto surface rather than being mere coincidence.
Very good point indeed, unless someone can explain this coincidence using cheaper assumptions.
https://www.lesswrong.com/posts/YihMH7M8bwYraGM8g/my-side-of-an-argument-with-jacob-cannell-about-chip