The impression of incuriosity is probably just because I collapsed my thoughts into a few bullet points.
The causal link between human intelligence and neurons is not just because they’re both complicated. My thought process here is something more like:
All instances of human intelligence we are familiar with are associated with a brain.
Brains are built out of neurons.
Neurons’ dynamics look very different from the dynamics of bits.
Maybe these differences are important for some of the things brains can do.
It feels pretty plausible that the underlying architecture of brains is important for at least some of the things brains can do. Maybe we will see multiple realizability, where similar intelligence can be built either on a brain or on a computer. But we have not (yet?) seen that, even for extremely simple brains.
I think both that we do not know how to build a superintelligence, and that even if we knew how to model neurons, silicon chips would run that model extremely slowly. Both pieces are missing.
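To make the third and fourth bullets slightly more concrete, here is a minimal toy contrast (my illustration, with made-up parameters, not a biophysical model): a leaky integrate-and-fire neuron has continuous-valued state evolving in continuous time, while a bit has two states updated only at discrete clock ticks.

```python
# Toy contrast between neuron-like and bit-like dynamics.
# The leaky integrate-and-fire model and all parameters here are
# illustrative assumptions, not a claim about real cortical neurons.

def neuron_trace(i_input=1.5, tau=0.02, v_thresh=1.0, dt=1e-4, steps=1000):
    """Euler-integrate dv/dt = (-v + i_input) / tau; spike and reset at threshold."""
    v, spike_times = 0.0, []
    for step in range(steps):
        v += dt * (-v + i_input) / tau  # continuous-valued, continuous-time state
        if v >= v_thresh:
            spike_times.append(step * dt)  # spike time is a real number
            v = 0.0                        # reset after the spike
    return spike_times

def bit_trace(steps=10):
    """A bit: two states, updated only at discrete clock ticks."""
    b, history = 0, []
    for _ in range(steps):
        b ^= 1                 # deterministic flip each tick
        history.append(b)
    return history

print(neuron_trace()[:3])  # spike times shift smoothly as i_input or tau change
print(bit_trace())         # [1, 0, 1, ...] -- exactly repeatable, no in-between states
```

Whether anything about the first kind of dynamics is load-bearing for intelligence is exactly the open question in the bullets above.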
Neurons’ dynamics look very different from the dynamics of bits.
Maybe these differences are important for some of the things brains can do.
This seems very reasonable to me, but I think it’s easy to get the impression from your writing that you think it’s very likely that:
The differences in dynamics between neurons and bits are important for the things brains do
The relevant differences will cause anything that does what brains do to be subject to the chaos-related difficulties of simulating a brain at a very low level.
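To spell out what the second bullet's chaos worry amounts to, here is a standard toy demonstration of sensitive dependence on initial conditions (the logistic map is a stand-in I'm choosing for illustration, not a brain model): two trajectories that start 10⁻¹² apart disagree completely within a few dozen steps.

```python
# Sensitive dependence on initial conditions, the core of the chaos worry.
# The logistic map at r = 4 is a standard chaotic toy system, chosen purely
# for illustration; nothing here is specific to brains.

def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-12)  # perturb the 12th decimal place

for step in (0, 10, 25, 50):
    print(step, abs(a[step] - b[step]))  # separation roughly doubles each step
```

If low-level brain dynamics behave like this, exactly reproducing a particular brain's trajectory would require tracking its state to absurd precision; the disagreement is over whether the capabilities we care about depend on the exact trajectory or only on coarser structure.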
I think Steven has done a good job of trying to identify a bit more specifically what it might look like for these differences in dynamics to matter. I think your case might be stronger if you had a bit more of an object-level description of what, specifically, is going on in brains that is relevant to doing things like “learning rocket engineering” and is also hard to replicate in a digital computer.
(To be clear, I think this is difficult and I don’t have much of an object-level take on any of this, but I think I can empathize with Steven’s position here.)
Not Jeffrey Heninger, but I’d argue a very clear, non-speculative advantage the brain has over today’s AIs is its much better balance between memory and compute operations: the brain doesn’t suffer from the von Neumann bottleneck, because it has both far more memory and much better memory bandwidth.
I argued for a memory size of around 2.5 petabytes, and even a substantial reduction in that figure would still beat out pretty much any modern AI system built today.
This is discussed in the post below: Memory bandwidth constraints imply economies of scale in AI inference.
https://www.lesswrong.com/posts/cB2Rtnp7DBTpDy3ii/memory-bandwidth-constraints-imply-economies-of-scale-in-ai
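For a rough sense of the bandwidth point (my numbers, not the post’s): at batch size 1, generating each token requires streaming all of the model’s weights from memory, so memory bandwidth sets a hard latency floor, and batching amortizes that same weight stream across many requests.

```python
# Back-of-the-envelope version of the memory-bandwidth argument.
# All numbers are illustrative assumptions (a 70B-parameter model in fp16
# on a ~3.35 TB/s HBM accelerator), not figures from the linked post.

params = 70e9           # assumed parameter count
bytes_per_param = 2     # fp16 weights
bandwidth = 3.35e12     # assumed memory bandwidth, bytes/second

weight_bytes = params * bytes_per_param
t_per_token = weight_bytes / bandwidth  # lower bound at batch size 1
print(f"latency floor: {t_per_token * 1e3:.0f} ms/token "
      f"(~{1 / t_per_token:.0f} tokens/s ceiling per request)")

# A batch of B requests shares one pass over the weights, so aggregate
# throughput scales roughly with B until compute becomes the binding limit.
for batch in (1, 8, 64):
    print(f"batch {batch:3d}: ~{batch / t_per_token:,.0f} aggregate tokens/s")
```

The per-request cost falling with batch size is the economies-of-scale effect the post’s title refers to.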