Biological bounds on requirements for human-level AI
Facts about biology bound the requirements for human-level AI. In particular, here are three prima facie bounds:
Lifetime. Humans develop human-level cognitive capabilities over a single lifetime, so (assuming our artificial learning algorithms are less efficient than humans’ natural learning algorithms) training a human-level model takes at least the inputs used over the course of babyhood-to-adulthood.
Evolution. Evolution found human-level cognitive capabilities by blind search, so (assuming we can search at least that well, and assuming evolution didn’t get lucky) training a human-level model takes at most the inputs used over the course of human evolution (plus Lifetime inputs, but that’s relatively trivial).
Genome. The size of the human genome is an upper bound on the complexity of humans’ natural learning algorithms. So training a human-level model takes at most the inputs needed to find a learning algorithm at most as complex as the human genome (plus Lifetime inputs, but that’s relatively trivial). (Unfortunately, the existence of human-level learning algorithms below a given complexity says almost nothing about the difficulty of finding such algorithms.) (Ajeya’s “genome anchor” is pretty different: “a transformative model would . . . have about as many parameters as there are bytes in the human genome.” It makes no sense to me.)
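To put rough magnitudes on these bounds, here is a back-of-the-envelope sketch in Python. Every input is an assumption of mine, chosen to be loosely in the spirit of the biological-anchors framework linked below: brain compute of ~1e15 FLOP/s (published estimates span several orders of magnitude), ~1e9 years of evolution since the earliest nervous systems, and ~3 billion base pairs in the human genome. The summed compute of the ancestral population is by far the most speculative input.

```python
# Back-of-the-envelope magnitudes for the three bounds above.
# Every number here is an assumption, not a claim from this post.

SECONDS_PER_YEAR = 3.15e7

# Lifetime bound: compute the human brain uses from birth to adulthood.
brain_flop_per_s = 1e15        # assumed brain compute; estimates span roughly 1e13-1e17 FLOP/s
years_to_adulthood = 30        # assumed developmental window
lifetime_flop = brain_flop_per_s * years_to_adulthood * SECONDS_PER_YEAR

# Evolution bound: compute run by all ancestral nervous systems combined.
years_of_neural_evolution = 1e9    # assumed time since the earliest neurons
ancestral_flop_per_s = 1e25        # assumed compute of the whole ancestral population at any moment
evolution_flop = ancestral_flop_per_s * years_of_neural_evolution * SECONDS_PER_YEAR

# Genome bound: description length of the human genome.
base_pairs = 3.1e9                 # ~3 billion base pairs
genome_bytes = base_pairs * 2 / 8  # 2 bits per base pair

print(f"Lifetime bound:  ~{lifetime_flop:.0e} FLOP")   # ~9e+23
print(f"Evolution bound: ~{evolution_flop:.0e} FLOP")  # ~3e+41
print(f"Genome size:     ~{genome_bytes:.0e} bytes")   # ~8e+08
```

With these particular assumptions the Lifetime bound comes out around 1e24 FLOP, the Evolution bound around 3e41 FLOP, and the genome around 750 MB. The point is not the specific values but the spread: roughly 17 orders of magnitude separate the lower bound from the upper bound.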
(A human-level AI should use roughly as much computation per subjective time as the human brain does. This assumption/observation is weird and perhaps shows that something weird is going on, but I don’t know how to make that sharp.)
There are few sources of bounds on requirements for human-level AI. Perhaps fundamental limits or reasoning about blind search could give weak bounds, but biology offers the only existing example of human-level cognitive abilities, and so it is the only possible source of reasonable bounds.
Related: Ajeya Cotra’s Forecasting TAI with biological anchors (most relevant section) and Eliezer Yudkowsky’s Biology-Inspired AGI Timelines: The Trick That Never Works.