The fact that the human brain was designed by trial and error is a given. However, we don’t really know how the brain works. It is possible that the brain contains a simple mathematical core, possibly implemented inefficiently, with bugs, and surrounded by tonnes of legacy code, but nevertheless responsible for the broad applicability of human intelligence.
Consider the following two views (which might also admit some intermediates):
View A: There exists a simple mathematical algorithm M that corresponds to what we call “intelligence” and that allows solving any problem in some very broad natural domain D.
View B: What we call intelligence is a collection of a large number of unrelated algorithms tailored to individual problems, and there is no “meta-algorithm” that produces them aside from relatively unsophisticated trial and error.
If View B is correct, then we expect that doing trial and error on a collection X of problems will produce an algorithm that solves problems in X and almost only in X. The probability that you were optimizing for X but ended up solving a much larger domain Y is vanishingly small: it is about the same as the probability that a completely random algorithm solves all problems in Y∖X.
If View A is correct, then we expect that doing trial and error on X has a non-negligible chance of producing M (since M is simple and therefore sampled with a relatively large probability), which would be able to solve all of D.
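To put the contrast a bit more quantitatively (a rough sketch, under my added assumption that trial and error behaves roughly like sampling from a simplicity prior, i.e. an algorithm of description length ℓ is found with probability on the order of 2^{-ℓ}; the notation ℓ(M) and p below is mine, not part of the argument above):

\Pr[\text{search on } X \text{ finds } M] \approx 2^{-\ell(M)}, \quad \text{non-negligible when } \ell(M) \text{ is small (View A)}

\Pr[\text{algorithm fit to } X \text{ also solves } Y \setminus X] \approx \Pr[\text{random algorithm solves } Y \setminus X] \approx p^{\,|Y \setminus X|} \quad \text{(View B)}

where p stands for the chance that an arbitrary algorithm happens to solve a single unrelated problem. The point is only that the first quantity is merely small, while the second shrinks roughly exponentially with the number of independent problems in Y∖X.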
So, the fact that Homo sapiens evolved in some prehistoric environment but was nevertheless able to, e.g., land on the Moon should be surprising to anyone holding View B, but not to those holding View A.
I think the most plausible view is: what we call intelligence is a collection of a large number of algorithms and innovations, each of which slightly increases effectiveness across a reasonably broad range of tasks.
To see why both View A and View B seem strange to me, consider the analogue for physical tasks. You could say that there is a simple core to human physical manipulation which allows us to solve any problem in some very broad natural domain. Or you could think that we just have a ton of tricks for particular manipulation tasks. But neither of those seems right: there is no simple core to the human body plan, yet at the same time it contains many features which are helpful across a broad range of tasks.
Regarding the physical manipulation analogy: I think that there actually is a simple core to the human body plan. This core is, more or less: a spine, two arms with joints in the middle, two legs with joints in the middle, feet, and hands with fingers. This is probably already enough to qualitatively solve more or less all physical manipulation problems humans can solve. All the nuances are needed to make it quantitatively more efficient and to deal with the detailed properties of biological tissues, biological muscles, et cetera (the latter might be considered analogous to the detailed properties of computational hardware and input/output channels for brains/AGIs).
I think that your view is plausible enough; however, if we focus only on qualitative performance metrics (e.g. time complexity up to a polynomial, regret bounds up to logarithmic factors), then this collection probably includes only a small number of innovations that are important.