I can give further evidence that this scenario is at least somewhat probable.
Due to the anthropic principle, general intelligence could have a one-in-an-octillion chance of ever evolving randomly, anywhere, ever, and we would still be here observing every successful step having happened, because if any step had failed, we wouldn’t be here to observe anything. There would still be plenty of animals like ants and chimpanzees, since evolution always produces plenty of alternative “failed” offshoots. So it’s always possible that some logical process is necessary for general intelligence, and that we’re astronomically unlikely to discover it randomly, by brute force or even by innovation, until we pinpoint the exact “lines of code” in the human brain that distinguish our intelligence from a chimpanzee’s.
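A toy simulation makes this selection effect concrete (the step count and per-step probability below are made-up illustrative numbers, not estimates of anything real): however small the per-step odds, every observer who exists looks back on a complete chain of successes, so the observed record tells us almost nothing about how hard the steps were.

```python
import random

# Toy model of anthropic selection. All numbers are illustrative
# assumptions, not estimates of real evolutionary probabilities.
HARD_STEPS = 5        # hypothetical "hard steps" on the road to observers
P_STEP = 0.1          # assumed chance a lineage clears any single step
N_WORLDS = 1_000_000  # simulated evolutionary histories

# A world produces observers only if every hard step succeeds.
observer_worlds = sum(
    all(random.random() < P_STEP for _ in range(HARD_STEPS))
    for _ in range(N_WORLDS)
)

# Unconditionally, observers are rare: about P_STEP**HARD_STEPS of worlds.
print(f"worlds with observers: {observer_worlds} / {N_WORLDS:,}")

# Conditioned on being an observer, though, the view is always identical:
# a full chain of successes, plus offshoots that stalled at earlier steps.
# That view looks the same whether P_STEP is 0.5 or 1e-9, so it carries
# almost no information about how hard the steps actually were.
```

Rerun this with any value of P_STEP: the count of observer-worlds changes wildly, but what each observer-world sees never does.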
Basically, the anthropic principle leaves open the possibility of at least one more AI winter ahead: even if we suddenly produced an AI at the chimpanzee level of intelligence, we could still be astronomically far from the final steps toward human-level general intelligence. We simply have no idea how unlikely human-level intelligence is to emerge randomly, just as we have no idea how unlikely life is to emerge randomly.
However, this is only a possibility. General intelligence could still be easy to brute-force, and we’d still be here either way. The recent pace of AI development is genuinely bad news for AGI timelines, and it doesn’t make sense to unplug a warning light instead of looking for the hazard it signals.
But as an estimate of the “log odds of human survival beyond 20 years”, that’s pretty unreasonable. There isn’t nearly enough evidence to conclude that the human race is “almost certainly doomed soon”; at most it supports “significantly more reason than before to worry about nearer-term AGI”.
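For scale (the 10% and 99% endpoints here are placeholders I’m picking for illustration, not anyone’s stated estimate): Bayesian evidence moves log odds additively, and shifting from 10% doom to 99% doom takes

$$\operatorname{logit}(p)=\ln\frac{p}{1-p},\qquad \operatorname{logit}(0.99)-\operatorname{logit}(0.10)\approx 4.60-(-2.20)\approx 6.8\ \text{nats}\approx 9.8\ \text{bits},$$

i.e. roughly ten independent observations, each twice as likely under doom as under survival. Recent capability progress is real evidence, but nothing close to that many bits.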