If I take the number of years since the emergence of Homo erectus (2 million years) and divide that by the number of years since the origin of life (3.77 billion years), and multiply that by the number of years since the founding of the field of artificial intelligence (65 years), I get a little over twelve days. This seems to at least not directly contradict my model of Eliezer saying “Yes, there will be an AGI capable of establishing an erectus-level civilization twelve days before there is an AGI capable of establishing a human-level one, or possibly an hour before, if reality is again more extreme along the Eliezer-Hanson axis than Eliezer. But it makes little difference whether it’s an hour or twelve days, given anything like current setups.” Also insert boilerplate “essentially constant human brain architectures, no recursive self-improvement, evolutionary difficulty curves bound above human difficulty curves, etc.” for more despair.
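Spelling out the arithmetic behind the “twelve days” figure (a back-of-the-envelope sketch; the 365.25-day year is my own rounding choice):

```python
# The erectus-to-present span, as a fraction of the history of life,
# scaled down to the length of the history of AI research.
years_since_erectus = 2e6        # emergence of Homo erectus
years_since_life = 3.77e9        # origin of life
years_of_ai_research = 65        # field founded in 1956

fraction = years_since_erectus / years_since_life
gap_years = fraction * years_of_ai_research
print(gap_years * 365.25)        # ~12.6 days
```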
I guess even though I don’t disagree that knowledge accumulation has been a bottleneck for humans dominating all other species, I don’t see any strong reason to think that knowledge accumulation will be a bottleneck for an AGI dominating humans, since the limits to human knowledge accumulation seem mostly biological. Humans seem to get less plastic with age; mortality, among other things, forces us to specialize our labor; we have to sleep; we lack serial depth; we don’t even approach the physical limits on speed; we can’t run multiple instances of our own source code; we have no previous example of an industrial civilization to observe; I could go on: a list of biological fetters that either wouldn’t apply to an AGI or that an AGI could emulate inside a single mind instead of across a civilization. I am deeply impressed by what has come out of the bare minimum of human innovative ability plus cultural accumulation. You say “The engine is slow”; I say “The engine hasn’t stalled, and look how easy it is to speed up!”
I’m not sure I like using the word ‘discontinuous’ to describe any real person’s position on plausible investment-output curves any longer; people seem to think it means “the intermediate value theorem doesn’t apply” (which is a reasonable reading of the word), when usually hard/fast takeoff proponents really mean “the intermediate value theorem still applies, but the curve can be almost arbitrarily steep on certain subintervals.”
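To illustrate the distinction: a logistic curve with a large steepness parameter is continuous in the mathematical sense (the intermediate value theorem holds everywhere), yet almost all of its change happens on a tiny subinterval, so at any coarse sampling it looks like a jump. A toy sketch (the parameter values here are arbitrary):

```python
import math

def logistic(x, steepness=200.0, midpoint=0.5):
    """Continuous everywhere, but nearly all of the change happens near the midpoint."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# Sampled coarsely, the curve looks like a step function,
# even though it passes through every intermediate value.
for x in [0.0, 0.25, 0.45, 0.49, 0.51, 0.55, 0.75, 1.0]:
    print(f"x={x:.2f}  y={logistic(x):.6f}")
```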
I guess even though I don’t disagree that knowledge accumulation has been a bottleneck for humans dominating all other species, I don’t see any strong reason to think that knowledge accumulation will be a bottleneck for an AGI dominating humans, since the limits to human knowledge accumulation seem mostly biological. Humans seem to get less plastic with age; mortality, among other things, forces us to specialize our labor; we have to sleep; we lack serial depth; we don’t even approach the physical limits on speed; we can’t run multiple instances of our own source code; we have no previous example of an industrial civilization to observe; I could go on: a list of biological fetters that either wouldn’t apply to an AGI or that an AGI could emulate inside a single mind instead of across a civilization.
I agree with this, and I think that you are hitting on a key reason that these debates don’t hinge on what the true story of the human intelligence explosion ends up being. Whichever of these is closer to the truth
a) the evolution of individually smarter humans using general reasoning ability was the key factor
b) the evolution of better social learners and the accumulation of cultural knowledge was the key factor
...either way, there’s no reason to think that AGI has to follow the same kind of path that humans did. I found an earlier post on the Henrich model of the evolution of intelligence, Musings on Cumulative Cultural Evolution and AI. I agree with Rohin Shah’s takeaway on that post:
I actually don’t think that this suggests that AI development will need both social and asocial learning: it seems to me that in this model, the need for social learning arises because of the constraints on brain size and the limited lifetimes. Neither of these constraints apply to AI—costs grow linearly with “brain size” (model capacity, maybe also training time) as opposed to superlinearly for human brains, and the AI need not age and die. So, with AI I expect that it would be better to optimize just for asocial learning, since you don’t need to mimic the transmission across lifetimes that was needed for humans.
(To be clear, the thing you quoted was commenting on the specific argument presented in that post. I do expect that in practice AI will need social learning, simply because that’s how an AI system could make use of the existing trove of knowledge that humans have built.)
I’m not sure I like using the word ‘discontinuous’ to describe any real person’s position on plausible investment-output curves any longer; people seem to think it means “the intermediate value theorem doesn’t apply” (which is a reasonable reading of the word), when usually hard/fast takeoff proponents really mean “the intermediate value theorem still applies, but the curve can be almost arbitrarily steep on certain subintervals.”
FWIW, when I use the word discontinuous in these contexts, I’m almost always referring to the definition Katja Grace uses:
We say a technological discontinuity has occurred when a particular technological advance pushes some progress metric substantially above what would be expected based on extrapolating past progress. We measure the size of a discontinuity in terms of how many years of past progress would have been needed to produce the same improvement. We use judgment to decide how to extrapolate past progress.
This is quite different from the mathematical definition of continuous.
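As a rough illustration of how that definition could be operationalized (my own minimal sketch, not AI Impacts’ actual methodology; it assumes past progress is extrapolated with a simple linear fit, where the real exercise requires judgment):

```python
import numpy as np

def discontinuity_in_years(years, metric, new_year, new_value):
    """Size of a jump above trend, measured in years of past progress.

    A linear fit stands in for "extrapolating past progress"; the data
    below are hypothetical, purely to show the shape of the calculation.
    """
    slope, intercept = np.polyfit(years, metric, 1)
    expected = slope * new_year + intercept
    return (new_value - expected) / slope  # years of trend-rate progress in the jump

# A metric that had been improving by ~1 unit/year jumps ~30 units
# above trend -> a ~30-year discontinuity.
hist_years = np.arange(1990, 2020)
hist_metric = 1.0 * (hist_years - 1990)
print(discontinuity_in_years(hist_years, hist_metric, 2020, 60.0))  # ~30.0
```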
That was a pretty good Eliezer model; for a second I was trying to remember if and where I’d said that.