Brains do these kinds of things because they run algorithms designed to do these kinds of things.
If by ‘algorithm’, you mean thing-that-does-a-thing, then I think I agree. If by ‘algorithm’, you mean thing-that-can-be-implemented-in-python, then I disagree.
Perhaps a good analogy comes from quantum computing.* Shor’s algorithm is not implementable on a classical computer. It can be approximated by a classical computer, at very high cost. Qubits are not bits, or combinations of bits. They have different underlying dynamics, which makes quantum computers importantly distinct from classical computers.
The claim is that the brain is also built out of things which are dynamically distinct from bits. ‘Chaos’ here is being used in the modern technical sense, not in the ancient Greek sense to mean ‘formless matter’. Low dimensional chaotic systems can be approximated on a classical computer, although this gets harder as the dimensionality increases. Maybe this grounds out in some simple mesoscopic classical system, which can be easily modeled with bits, but it seems likely to me that it grounds out in a quantum system, which cannot.
* I’m not an expert in quantum computing, so I’m not super confident in this analogy.
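To make the ‘chaos’ claim concrete, here is a minimal sketch (my own toy example, using the standard Lorenz system rather than anything brain-specific) of what sensitive dependence on initial conditions looks like for a low-dimensional chaotic system:

```python
# Minimal sketch (illustrative, not from the comment) of chaos in the technical
# sense: the Lorenz system, a 3-dimensional chaotic system, integrated from two
# initial conditions that differ by one part in a billion. The trajectories stay
# close for a while and then diverge completely -- any fixed-precision simulation
# eventually loses track of the "true" trajectory.
import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # nearly identical starting point
for step in range(40_000):           # ~40 time units with dt = 0.001
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10_000 == 0:
        print(f"t={step * 0.001:5.1f}  separation={np.linalg.norm(a - b):.2e}")
# The separation grows roughly exponentially until it saturates at the size of
# the attractor, even though both copies follow the same deterministic rule.
```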
Different kinds of computers have different operations that are fast versus slow.
On a CPU, performing 1,000,000 inevitably-serial floating point multiplications is insanely fast, whereas multiplying 10,000×10,000 floating-point matrices is rather slow. On a GPU, it’s the reverse.
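As a crude illustration of that asymmetry, here is a sketch of mine contrasting a dependent serial chain of multiplications with one large matrix multiply (with the caveat that a pure-Python loop adds interpreter overhead on top of the hardware effect):

```python
# Crude timing sketch (illustrative only): a long chain of *dependent*
# multiplications is purely serial and rewards low per-op latency, while one
# large matrix multiply is massively parallel and rewards raw throughput.
import time
import numpy as np

# 1,000,000 dependent multiplications: each step needs the previous result, so
# extra parallel hardware doesn't help. (The pure-Python loop is dominated by
# interpreter overhead, so treat the absolute number as a rough illustration.)
x = 1.0000001
t0 = time.perf_counter()
for _ in range(1_000_000):
    x = x * 1.0000001
t_serial = time.perf_counter() - t0

# One 10,000 x 10,000 float32 matrix multiply (~2e12 floating point ops);
# allocates roughly 1.2 GB in total.
a = np.random.rand(10_000, 10_000).astype(np.float32)
b = np.random.rand(10_000, 10_000).astype(np.float32)
t0 = time.perf_counter()
c = a @ b
t_matmul = time.perf_counter() - t0

print(f"serial chain: {t_serial:.3f}s   matmul: {t_matmul:.3f}s")
# On a typical CPU the serial chain finishes in a fraction of a second while the
# matmul takes several seconds; on a GPU the matmul drops to milliseconds, but
# the dependent chain cannot be parallelized away.
```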
By the same token, there are certain low-level operations that are far faster on quantum computers than classical computers, and vice-versa. In regards to Shor’s algorithm, of course you can compute discrete logs on classical computers, it just takes exponentially longer than with quantum computers (at least with currently-known algorithms), because quantum computers happen to have an affordance for certain fast low-level operations that lead to calculations of the discrete log.
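For concreteness, here is a toy sketch (mine, not from the thread) of what “computable classically, just much slower” looks like for the discrete log:

```python
# Toy illustration: the discrete log problem that Shor's algorithm solves
# quickly is perfectly computable on a classical computer -- here by brute
# force -- at a cost that is linear in the modulus p, i.e. exponential in the
# number of bits of p.
def discrete_log(g: int, h: int, p: int) -> int:
    """Find x with g**x % p == h by trying every exponent (O(p) time)."""
    value = 1
    for x in range(p):
        if value == h:
            return x
        value = (value * g) % p
    raise ValueError("no discrete log exists")

# Tiny example: 3^x = 13 (mod 17)  ->  x = 4
print(discrete_log(3, 13, 17))
```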
So anyway, it’s coherent to say that:
Maybe there is some subproblem which is extremely helpful for human-like intelligence, in the same way that calculating discrete logs is extremely helpful for factoring large numbers.
Maybe neurons and collections of neurons have particular affordances which enable blazingly-fast low-level possibly-analog solution of that subproblem. Like, maybe the dynamics of membrane proteins just happens to line up with the thing you need to do in order to approximate the solution to some funny database query thing, or whatever.
…and therefore, maybe brains can do things that would require some insanely large amount of computer chips to do.
…But I don’t think there’s any reason to believe that, and it strikes me as very implausible.
Hmm, I guess I get the impression from you of a general lack of curiosity about what’s going on here under the hood. Like, exactly what kinds of algorithmic subproblems might come up if you were building a human-like intelligence from scratch? And exactly what kind of fast low-level affordances are enabled by collections of neurons, that are not emulate-able by the fast low-level affordances of chips? Do we expect those two sets to overlap or not? Those are the kinds of questions that I’m thinking about. Whereas the vibe I’m getting from your writing—and I could be wrong—is “Human intelligence is complicated, and neurons are complicated, so maybe the latter causes the former, shrug”.
Also, in regards to Shor’s algorithm, long before quantum computers existed, we already knew how to calculate discrete logs, and we already knew that doing so would allow us to factor big numbers. It was just annoyingly slow. By contrast, I do not believe that we already know how to make a superintelligent agent, and we just don’t do it because our chips would do it very slowly. Do you agree? If so, then the thing we’re missing is not “Our chips have a different set of fast low-level affordances than do neurons, and the neurons’ set is better suited to the calculations that we need than the chips’ set.” Right?
The impression of incuriosity is probably just because I collapsed my thoughts into a few bullet points.
The causal link between human intelligence and neurons is not just because they’re both complicated. My thought process here is something more like:
All instances of human intelligence we are familiar with are associated with a brain.
Brains are built out of neurons.
Neurons’ dynamics looks very different from the dynamics of bits.
Maybe these differences are important for some of the things brains can do.
It feels pretty plausible that the underlying architecture of brains is important for at least some of the things brains can do. Maybe we will see multiple realizability where similar intelligence can be either built on a brain or on a computer. But we have not (yet?) seen that, even for extremely simple brains.
I think both that we do not know how to build a superintelligence, and that even if we knew how to model neurons well enough to do so, silicon chips would run it extremely slowly. Both things are missing.
Neurons’ dynamics looks very different from the dynamics of bits.
Maybe these differences are important for some of the things brains can do.
This seems very reasonable to me, but I think it’s easy to get the impression from your writing that you think it’s very likely that:
The differences in dynamics between neurons and bits are important for the things brains do
The relevant differences will cause anything that does what brains do to be subject to the chaos-related difficulties of simulating a brain at a very low level.
I think Steven has done a good job of trying to identify a bit more specifically what it might look like for these differences in dynamics to matter. I think your case might be stronger if you had a bit more of an object level description of what, specifically, is going on in brains that’s relevant to doing things like “learning rocket engineering”, that’s also hard to replicate in a digital computer.
(To be clear, I think this is difficult and I don’t have much of an object level take on any of this, but I think I can empathize with Steven’s position here)
Not Jeffrey Heninger, but I’d argue a very clear, non-speculative advantage the brain has over the AIs of today is its much better balance between memory and compute operations: the brain doesn’t suffer from the von Neumann bottleneck, because it has both way more memory and much better memory bandwidth.
I argued for a memory size of around 2.5 petabytes, though even a substantial reduction in this value would still beat out pretty much all modern AI built today.
This is discussed in the post below: Memory bandwidth constraints imply economies of scale in AI inference.
https://www.lesswrong.com/posts/cB2Rtnp7DBTpDy3ii/memory-bandwidth-constraints-imply-economies-of-scale-in-ai
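For a rough sense of why memory bandwidth, rather than raw FLOPs, is the binding constraint, here is a back-of-the-envelope sketch; the model size, byte width, and bandwidth figures are illustrative assumptions of mine, not numbers from the linked post:

```python
# Back-of-the-envelope sketch (illustrative numbers) of why memory bandwidth
# often bounds autoregressive inference: generating one token at batch size 1
# requires streaming every weight through the chip once.
PARAMS = 70e9            # hypothetical 70B-parameter model
BYTES_PER_PARAM = 2      # fp16/bf16 weights
GPU_BANDWIDTH = 2.0e12   # ~2 TB/s of HBM, roughly a current high-end accelerator

bytes_per_token = PARAMS * BYTES_PER_PARAM
tokens_per_sec_ceiling = GPU_BANDWIDTH / bytes_per_token
print(f"bandwidth-bound ceiling: ~{tokens_per_sec_ceiling:.0f} tokens/s at batch size 1")
# ~14 tokens/s even if the arithmetic units were infinitely fast -- the compute
# sits mostly idle waiting on memory, which is the von Neumann bottleneck being
# pointed at, and why batching (sharing each weight read across many requests)
# creates the economies of scale the linked post discusses.
```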