I have calculated the number of computer operations used by evolution to evolve the human brain (searching through organisms with increasing brain size) by adding up all the computations that were done by any brains before modern humans appeared. It comes out to 10^43 computer operations. AGI isn’t coming any time soon!
And yet, because your reasoning contains the word “biological”, it is just as invalid and unhelpful as Moravec’s original prediction.
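For concreteness, the 10^43 in the quoted parody is an "evolutionary anchor" style of estimate: the total compute performed by all nervous systems over evolutionary history. A minimal sketch of how a number of that order gets assembled, where every parameter is an illustrative assumption rather than a sourced figure:

```python
# Back-of-the-envelope "evolutionary anchor" arithmetic (illustrative only).
# Every value below is an assumption chosen to show the shape of the estimate,
# not a sourced figure; the product lands near the 1e43 quoted above.

YEARS_OF_NERVOUS_SYSTEMS = 1e9        # assumed: ~1 billion years since the first neurons
SECONDS_PER_YEAR = 3.15e7
ORGANISMS_ALIVE_AT_ONCE = 1e21        # assumed: population dominated by tiny-brained animals
OPS_PER_ORGANISM_PER_SECOND = 3e5     # assumed: a few hundred neurons, a few spikes/s, ~1e3 ops/spike

total_ops = (
    YEARS_OF_NERVOUS_SYSTEMS
    * SECONDS_PER_YEAR
    * ORGANISMS_ALIVE_AT_ONCE
    * OPS_PER_ORGANISM_PER_SECOND
)
print(f"total brain-ops over evolutionary history ~ {total_ops:.0e}")  # ~9e+42
```

The structure of the product, not the particular values, is the point; the thread below treats the resulting figure as, at best, a soft upper bound.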
I agree that the conclusion about AGI not coming soon is invalid, so the following isn’t exactly responding to what you say. But: ISTM the evolution thing is somewhat qualitatively different from Moravec or Stack More Layers, in that it softly upper bounds the uncertainty about the algorithmic knowledge needed to create AGI. IDK how easy it would be to implement an evolution that spits out AGI, but that difficulty seems like it should be less conceptually uncertain than the difficulty of understanding enough about AGI to do something more clever with less compute. Like, we could extrapolate out 3 OOMs of compute/$ per decade to get an upper bound: very probably AGI before 2150-ish, if Moore’s law continues. Not very certain, or helpful if you already think AGI is very likely soon-ish, but it has nonzero content.
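A minimal sketch of the extrapolation arithmetic in the comment above, assuming a present-day price-performance figure and budget (both illustrative; the comment itself supplies only the 3-OOMs-per-decade rate and the rough 2150 endpoint):

```python
import math

# Toy version of the "3 OOMs of compute/$ per decade" extrapolation.
# The anchor (1e43 ops) comes from the quoted estimate; the price-performance,
# budget, and start year are illustrative assumptions, not claims.

EVOLUTION_ANCHOR_OPS = 1e43
FLOP_PER_DOLLAR_NOW = 1e17     # assumed current price-performance
BUDGET_DOLLARS = 1e9           # assumed spend on a single run
OOMS_PER_DECADE = 3            # the extrapolated improvement rate
START_YEAR = 2020

affordable_ops_now = FLOP_PER_DOLLAR_NOW * BUDGET_DOLLARS           # 1e26
ooms_short = math.log10(EVOLUTION_ANCHOR_OPS / affordable_ops_now)  # 17 OOMs
years_needed = 10 * ooms_short / OOMS_PER_DECADE                    # ~57 years

print(f"OOMs short of the anchor today: {ooms_short:.0f}")
print(f"anchor-sized run affordable around {START_YEAR + years_needed:.0f}")
```

With these toy inputs the crossover lands well before 2150; the comment's "2150-ish" presumably builds in extra margin, e.g. for how much costlier actually running an evolution-like search would be than the raw brain-ops total.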
Like, we could extrapolate out 3 OOMs of compute/$ per decade to get an upper bound: very probably AGI before 2150-ish, if Moore’s law continues.
Projecting Moore’s Law to continue for 130 years more is almost surely incorrect. An upper bound that is conditional on that happening seems devoid of any actual predictive power. If we approach that level of computational power prior to AGI, it will almost surely be through some other mechanism than Moore’s Law, and so would be arbitrarily detached from that timeline.
Seems right, IDK. But still, that’s a different kind of uncertainty than uncertainty about, like, the shape of algorithm-space.
Well Eliezer did explicitly state that “it was, predictably, a directional overestimate”. His concern was that it is a useless estimate, not that it didn’t roughly bound the amount of computation required.
+1. I will also venture a guess that:

OpenPhil: Well, search by evolutionary biology is more costly than training by gradient descent, so in hindsight, it was an overestimate. Are you claiming this was predictable in foresight instead of hindsight?
is a strawman. I expect that the 2006 equivalent of OpenPhil would have recognised the evolutionary anchor as a soft upper bound. And I expect current OpenPhil to perfectly well understand the reasons for why this was predictable in foresight.