The problem with the “evolution got really unlucky” assumption is the Fermi paradox. If we assume the entire Great Filter is already behind us, then resolving the Fermi paradox essentially requires that evolution got really lucky at some point. In principle, all of that luck could have been concentrated in an early step such as abiogenesis, a step whose equivalent AI capabilities research has arguably already achieved, with no special luck needed after that.
The important question is whether we’re already past “the Great Filter” in whatever makes intelligence difficult to evolve naturally. If the difficulty is concentrated in earlier steps, then we’re likely already past it and it poses no problem; but if, say, the apes → humans transition was particularly difficult, then building AGI might take far more compute than we’ll have at our disposal, or at least evolutionary arguments cannot put a good bound on how much compute it would take.
The counterargument I give is that Hanson’s model implies that if the apes → humans transition was particularly hard, then the number of hard steps in evolution has to be on the order of 100. The reason is that in the model, conditional on success, every hard step takes roughly the same expected time, namely the length of the habitable window divided by n + 1, so a hard step that completed in mere millions of years forces n to be large. A number that large seems inconsistent both with details of evolutionary history (for example, how long it took to get from unicellular to multicellular life) and with what we think we know about Earth’s remaining habitable lifespan, which under the same model should also be about one such interval and therefore implies a small n. So the number of hard steps was probably small, and that is inconsistent with the apes → humans transition being a hard step.
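To make the key property of the model concrete, here is a minimal Monte Carlo sketch (the rates and window length are made-up illustrative numbers, not figures from the argument above): conditional on all hard steps completing within the habitable window, each step’s expected duration comes out near T/(n + 1) regardless of how intrinsically hard that step is.

```python
import numpy as np

# Minimal sketch of the hard-steps property (illustrative numbers, assumed
# for demonstration). n sequential steps have exponential waiting times
# with very different rates, and a history "succeeds" only if all steps
# finish within a habitable window of length T. Conditional on success,
# each step's mean duration is roughly T / (n + 1), no matter how hard
# that step individually is.

rng = np.random.default_rng(0)
T = 1.0                                # habitable window (arbitrary units)
rates = np.array([0.02, 0.1, 0.5])     # a 25x spread in step difficulty
n = len(rates)

kept = []
for _ in range(40):                    # rejection-sample successes in chunks
    d = rng.exponential(1.0 / rates, size=(500_000, n))
    kept.append(d[d.sum(axis=1) <= T])
kept = np.concatenate(kept)

print(f"successful histories: {len(kept)}")
print("mean conditional duration per step:", kept.mean(axis=0).round(3))
print("uniform prediction T/(n+1):", round(T / (n + 1), 3))
```

The point of the deliberately unequal rates is that the conditional durations nonetheless come out roughly equal. That is why a hard step that visibly completed quickly, as apes → humans would have if it were one, forces the total number of hard steps to be large.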