The dynamics of neurons look very different from the dynamics of bits.
Maybe these differences are important for some of the things brains can do.
This seems very reasonable to me, but I think it’s easy to get the impression from your writing that you think it’s very likely that:
1. The differences in dynamics between neurons and bits are important for the things brains do.
2. The relevant differences will cause anything that does what brains do to be subject to the chaos-related difficulties of simulating a brain at a very low level.
I think Steven has done a good job of trying to identify a bit more specifically what it might look like for these differences in dynamics to matter. Your case might be stronger if you had more of an object-level description of what, specifically, is going on in brains that’s relevant to doing things like “learning rocket engineering”, and that’s also hard to replicate in a digital computer.
(To be clear, I think this is difficult and I don’t have much of an object-level take on any of this, but I think I can empathize with Steven’s position here.)
Not Jeffrey Heninger, but I’d argue a very clear, non-speculative advantage the brain has over today’s AIs is its much better balance between memory and compute operations: the brain doesn’t suffer from the Von Neumann bottleneck, because it has both far more memory and much higher memory bandwidth.
I argued for a memory size of 2.5 petabytes, and even a substantial reduction in this estimate would still beat out pretty much every modern AI system built today.
This is discussed in the post below: Memory bandwidth constraints imply economies of scale in AI inference.
https://www.lesswrong.com/posts/cB2Rtnp7DBTpDy3ii/memory-bandwidth-constraints-imply-economies-of-scale-in-ai
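The memory-bandwidth point above can be made concrete with a back-of-the-envelope calculation. A rough sketch (all numbers are illustrative assumptions, not taken from the linked post): during autoregressive decoding, each generated token requires streaming the full set of model weights from memory, so single-stream token throughput is bounded above by memory bandwidth divided by model size.

```python
def max_tokens_per_sec(bandwidth_bytes_per_sec: float, model_bytes: float) -> float:
    """Upper bound on single-stream decode speed when inference is memory-bound:
    every token forces a full read of the weights, so rate <= bandwidth / weights."""
    return bandwidth_bytes_per_sec / model_bytes

# Hypothetical example: a 70B-parameter model at 2 bytes per parameter (fp16)
# on an accelerator with ~2 TB/s of memory bandwidth.
model_bytes = 70e9 * 2      # 140 GB of weights
bandwidth = 2e12            # 2 TB/s

print(round(max_tokens_per_sec(bandwidth, model_bytes), 1))  # ~14.3 tokens/s
```

Batching many users together amortizes each weight read across the whole batch, which is the economies-of-scale effect the linked post discusses; a brain-like architecture with memory co-located with compute would not face this trade-off in the same way.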