OK, I’ll bite on EY’s exercise for the reader, on refuting this “what-if”:
Humbali: Then here’s one way that the minimum computational requirements for general intelligence could be higher than Moravec’s argument for the human brain. Since, after all, we only have one existence proof that general intelligence is possible at all, namely the human brain. Perhaps there’s no way to get general intelligence in a computer except by simulating the brain neurotransmitter-by-neurotransmitter. In that case you’d need a lot more computing operations per second than you’d get by calculating the number of potential spikes flowing around the brain! What if it’s true? How can you know?
Let’s step back and consider what kind of artifact the brain is. The human brain was “found” by evolution via a selection process over a rather limited amount of time (between our most recent clearly-dumb ancestor and anatomically modern humans). In other words, a local optimization process, optimizing over a relatively short timescale, found a brain which implements a generally intelligent algorithm.
In high-dimensional non-convex optimization, we have a way to describe algorithms found by a small amount of local optimization: “not even close to optimal.” (Humans aren’t even at a local optimum for inclusive-genetic-fitness, due to our being mesa-optimizers.) But if the brain’s algorithm isn’t optimal, then some better generally intelligent algorithm exists, and so the brain’s trivially can’t be the only algorithm that can produce general intelligence. Indeed, I would expect the fact that evolution found our algorithm at all to indicate that there are many possible such algorithms.
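To make the “not even close to optimal” point concrete, here’s a toy sketch (my own illustration, not anything from EY’s post, and not a model of evolution or brains): greedy hill-climbing on an arbitrary rugged 1-D landscape. Different random starting points settle into different local optima, and the worst of them land nowhere near the best value any run finds.

```python
import math
import random

# Toy rugged landscape: a sum of two sine waves gives many local maxima,
# and a quadratic penalty pulls the best values toward x = 0.
# The function, step size, and restart count are arbitrary illustrative choices.
def fitness(x):
    return math.sin(5 * x) + 0.5 * math.sin(17 * x) - 0.05 * x * x

# Greedy local search: accept a small random move only if it improves fitness.
def hill_climb(x, steps=2000, step_size=0.01):
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
starts = [random.uniform(-10, 10) for _ in range(20)]
optima = [hill_climb(x) for x in starts]
values = sorted(fitness(x) for x in optima)

print(sorted(round(x, 2) for x in optima))  # many distinct stopping points
print(values[0], values[-1])                # worst vs. best local optimum found
```

The analogy is loose, but this is the standard picture: a short run of local search in a rugged space hands you one workable point out of very many, with no particular claim to being the best one.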
There are many generally intelligent algorithms, and our brain only implements one, and it’s just not going to be true that all of the others—or even the ones most likely to be discovered by AI researchers—are only implementable using (simulated) neurotransmitters.
In high-dimensional non-convex optimization, we have a way to describe algorithms found by a small amount of local optimization: “not even close to optimal.”
Does this extend to ‘a bunch of algorithms together’? (I.e., how does ‘the brain does not do everything with a single algorithm’ affect optimality?)
There’s no strong reason to think the brain does everything with a single algorithm.