I think I understand why “hard step” models predict few sequential hard steps in the few hundred million years since that ancestor, but I do not see why this counts against extreme evolutionary challenge in developing human-level intelligence.
I think it’s explained in this passage, but I’m having trouble following the reasoning:
Thus, the “hard steps” model rules out a number of possible “hard intelligence” scenarios: evolution typically may take prohibitively long to get through certain “hard steps”, but, between those steps, the ordinary process of evolution suffices, even without observation selection effects, to create something like the progression we see on Earth. If Earth’s remaining habitable period is close to that given by estimates of the sun’s expansion, observation selection effects could not have given us hundreds or thousands of steps of acceleration, and so could not, for example, have uniformly accelerated the evolution of human intelligence across the last few billion years.
Could the authors, or someone who does understand it, expand on it a bit?
One might have assigned significant prior probability to there being hundreds of sequential hard innovations required for human intelligence, e.g. in brain design. There might have been ten, a hundred, or a billion. If the hard steps model, combined with substantial remaining time in our habitable window, can lop off almost all of the probability mass assigned to those scenarios (which involve hard intelligence), that is a boost for the easy intelligence hypothesis.
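To make the “lopping off” concrete, here is a toy Bayesian update of my own (not a calculation from the paper; the window length, remaining time, and flat prior are all illustrative assumptions). It uses the standard result that, conditional on k very hard steps all fitting inside the window, their completion times behave like k sorted uniform draws:

```python
# Toy Bayesian update over the number of hard steps k. Assumptions (mine,
# illustrative): a ~5.5 Gyr habitable window, ~1 Gyr of it still remaining,
# and a flat prior over k. Conditional on k very hard steps all completing
# inside the window, their completion times behave like k sorted uniform
# draws, so the last step is the maximum of k uniforms on [0, T].
import numpy as np

T = 5.5e9          # assumed total habitable window, in years
remaining = 1.0e9  # assumed habitable time left when intelligence arrived

ks = np.arange(1, 1001)        # candidate numbers of hard steps
log_prior = np.zeros(ks.size)  # flat (unnormalized) prior over k

# Density of the max of k uniforms on [0, T] at t = T - remaining:
# p(t | k) = k * t**(k-1) / T**k, computed in log space to avoid overflow.
t_last = T - remaining
log_like = np.log(ks) + (ks - 1) * np.log(t_last) - ks * np.log(T)

log_post = log_prior + log_like
log_post -= log_post.max()     # stabilize before exponentiating
posterior = np.exp(log_post)
posterior /= posterior.sum()

print(f"posterior mass on k <= 10:  {posterior[ks <= 10].sum():.3f}")
print(f"posterior mass on k >= 100: {posterior[ks >= 100].sum():.2e}")
```

Under these assumptions the posterior peaks around k ≈ 4–5 (matching the expected remaining time T/(k+1)), and scenarios with hundreds of hard steps retain only a vanishing sliver of the mass.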
Also, the fewer hard innovations that humans must replicate in creating AI, the more likely we are to succeed.
Thanks, that answers my question. But the hard steps model also rules out scenarios involving many steps, each individually easy, leading to human intelligence, right? Why think that overall it gives a boost for the easy intelligence hypothesis?
But the hard steps model also rules out scenarios involving many steps, each individually easy, leading to human intelligence, right?
If the steps are sequential, the time to evolve human intelligence is the sum of many independent small step times, which (absent a cutoff) gives you a roughly normal distribution. So if there were a billion steps with step times of a million years, you would expect us to find ourselves much closer to the end of Earth’s habitable window.
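A small numerical sketch of this point (mine, with illustrative numbers): the sum of n exponential step times is Gamma-distributed and concentrates around its mean as n grows, so conditioning on finishing inside the habitable window pushes the conditional completion time toward the window’s end:

```python
# Numerical sketch (illustrative numbers, not the paper's): the total time
# for n sequential easy steps, each exponential with mean mu, is
# Gamma(n, mu). Its mean is n*mu and its spread sqrt(n)*mu, so it
# concentrates as n grows; conditioning on finishing inside the habitable
# window then forces the completion time toward the window's very end.
from scipy.stats import gamma

window = 5e9  # rough habitable window, in years

for n in [10, 100, 1_000]:
    mu = 1.2 * window / n         # hold the expected total at 1.2 windows
    dist = gamma(a=n, scale=mu)
    p_success = dist.cdf(window)  # chance of finishing inside the window

    # Median completion time, conditional on finishing inside the window,
    # expressed as a fraction of the window:
    median_frac = dist.ppf(0.5 * p_success) / window
    print(f"n={n:>5}: P(success)={p_success:.2e}, "
          f"conditional median completion={median_frac:.3f} of window")
```

With these numbers the conditional median moves from roughly 0.83 of the window at n = 10 to over 0.99 at n = 1000: the more easy steps there are, the closer to the end we should expect to find ourselves, contrary to observation.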
Why think that overall it gives a boost for the easy intelligence hypothesis?
Let’s take as our starting point that intelligence is difficult enough to occur in less than 1% of star systems like ours. One supporting argument is that, if we had started with a flattish prior over difficulty, much of the credence for intelligence being at least that easy to evolve would have fallen on scenarios in which intelligence was easy enough to reliably develop near the beginning of Earth’s habitable window [see Carter (1983)]. Another is the Great Filter: the lack of visible alien intelligence.
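Here is a minimal one-step version of Carter’s timing argument (my sketch, not the paper’s model; the numbers are illustrative):

```python
# One-step sketch of Carter's timing argument (illustrative numbers).
# Treat evolving intelligence as a single event with exponential waiting
# time of unknown mean tau, put a log-flat prior over tau, and condition
# on the event happening inside the window. Easy scenarios (tau << window)
# predict an early arrival, so our fairly late arrival shifts posterior
# mass toward hard scenarios.
import numpy as np

T = 5.5e9      # assumed habitable window, in years
t_obs = 4.5e9  # rough arrival time of intelligence within that window

tau = np.logspace(6, 15, 2000)        # candidate mean waiting times, years
prior = np.ones_like(tau) / tau.size  # log-flat prior over difficulty

# Arrival density at t_obs, conditional on arriving within the window:
density = (np.exp(-t_obs / tau) / tau) / (1.0 - np.exp(-T / tau))

posterior = prior * density
posterior /= posterior.sum()

easy = tau < T  # crude cut for "easy enough to evolve well within the window"
print(f"prior mass on easy scenarios:     {prior[easy].sum():.3f}")
print(f"posterior mass on easy scenarios: {posterior[easy].sum():.3f}")
```

Under these assumptions most of the prior mass on easy scenarios is wiped out by the late arrival, which is the update toward difficulty described above.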
So we need some barriers to the evolution of intelligence. The hard steps analysis then places limits on their number, and stronger limits on the number passed since the development of brains, or primates. This suggests that the barriers will collectively be much easier for engineers to work around than a random draw from our difficulty distribution after updating on the above considerations, but before considering the hard steps model.
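And a follow-on sketch of why so few hard steps can have occurred recently (again with my own illustrative numbers): conditional on fitting inside the window, hard-step times scatter roughly uniformly across it, so only a small expected number land in the last few hundred million years:

```python
# Follow-on sketch (illustrative numbers): conditional on all k hard steps
# fitting inside the window, their times scatter like k uniform draws, so
# only a small expected number land in the last few hundred million years.
import numpy as np

rng = np.random.default_rng(0)
T = 4.5e9     # assumed elapsed time from early life to humans, in years
recent = 4e8  # rough span since the relevant recent ancestor, in years
k = 5         # a small hard-step count of the kind favored above

steps = rng.uniform(0.0, T, size=(100_000, k))
n_recent = (steps > T - recent).sum(axis=1)
print(f"expected hard steps in the most recent {recent:.0e} years: "
      f"{n_recent.mean():.2f}")
```

With k = 5 the expectation is well under one hard step in the recent span, which is the sense in which the limits are stronger for the period since brains or primates.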
We had more explanation of this, cut for space constraints. Perhaps we should reinstate it.