If you accept the premise that life is not unique and special, then even one other technological civilisation in the observable universe should be enough to leave potentially observable traces of technological tinkering.
Given the absence of any signs of intelligence out there, especially of paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk we should worry about.
It seems like an argument for DOOM—but what if getting this far is simply very difficult?
Then we could be locally first, without hypothesizing that we are surrounded by the ashes of many other failed technological civilisations.
In which case, machine intelligence might well represent humanity’s biggest danger.
Lack of aliens just illustrates the great filter; it doesn't imply that the filter lies ahead of us. Indeed, from the billions of years of progress, culminating only recently in the first invention of space travel, we can see that much of it lies behind us.
How does that imply that there's a lot behind us? It could be that the technology that creates most of the great filter is something that generally arises close to the tech level at which a civilisation is about to achieve large-scale space travel. (I think the great filter is likely behind us, but the mere fact that it has taken us billions of years to get here is not, by itself, a decent argument that much of the filter is in fact in the past.)
There are around four billion years' worth of hold-ups behind us, and probably not much longer to wait.
That doesn't show that most of the risk isn't ahead of us; whether it is would be a complex speculation based on too many factors to fit into this blog comment. The point is that you can't argue from the absence of aliens to major future risks, since we might well be past the bulk of the filter.
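To make that last inference concrete, here is a minimal toy Bayes calculation (my own illustration, with made-up prior and likelihood numbers, not part of the original comments): an empty sky is roughly equally likely whether the filter lies behind us or ahead of us, so the observation mostly rules out "no big filter at all" and leaves the behind-versus-ahead question to whatever prior you started with.

```python
# Toy sketch only: hypothetical priors and likelihoods chosen for illustration.
priors = {
    "no_big_filter": 0.3,
    "filter_behind": 0.35,
    "filter_ahead": 0.35,
}

# Likelihood of observing an empty sky under each hypothesis (rough toy values).
likelihood_no_aliens = {
    "no_big_filter": 0.01,  # with no filter, visible civilisations would be common
    "filter_behind": 0.99,  # few civilisations ever arise, so the sky looks empty
    "filter_ahead": 0.99,   # civilisations arise but die before spreading
}

unnormalised = {h: priors[h] * likelihood_no_aliens[h] for h in priors}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

for hypothesis, p in posterior.items():
    print(f"{hypothesis}: {p:.3f}")

# "filter_behind" and "filter_ahead" end up at roughly equal posterior odds
# (~0.5 each), so the empty sky alone doesn't argue that the filter, and the
# associated doom, lies ahead.
```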