Artificial Intelligence could explode in power and leave the direct control of humans in the next century or so. It may then move on to optimize the reachable universe to its goals. Some think this sequence of events likely.
If this occurred, it would constitute an instance of our star passing the entire Great Filter. If we should cause such an intelligence explosion, then we are the first civilization in roughly our past light cone to be in such a position. If anyone else had been in this position, our part of the universe would already be optimized, which it arguably does not appear to be. This means that if there is a big AI explosion in our future (one that optimizes much of the reachable universe), the entire strength of the Great Filter lies in steps before us.
This means a big AI explosion is less likely after considering the strength of the Great Filter, and much less likely if one uses the Self Indication Assumption (SIA).
For similar reasons, SIA also implies that we are unlikely to give rise to an intelligence explosion, and probably implies it much more strongly.
In summary, if you begin with some uncertainty about whether we precede an AI explosion, then updating on the observed large total filter and accepting SIA should make you much less confident in that outcome.
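The direction of this update can be illustrated with a toy model. The Python sketch below is not from the original post: the two stylized hypotheses, the 50/50 prior, and the observer counts are made-up numbers chosen only to show how SIA's weighting by observer count pushes credence away from the "we precede an AI explosion" hypothesis.

```python
# Toy numerical sketch of the SIA update described above (illustrative only).
# Both hypotheses are consistent with the observed silence (no optimized
# universe); they differ in how many civilizations reach our current stage.
hypotheses = {
    # Filter almost entirely behind us: very few civilizations reach our
    # stage, but those that do go on to an intelligence explosion.
    "early filter (we precede an AI explosion)": {
        "prior": 0.5,
        "observers_at_our_stage": 1,
    },
    # Filter largely ahead of us: many civilizations reach our stage and
    # then fail before optimizing anything.
    "late filter (we probably do not)": {
        "prior": 0.5,
        "observers_at_our_stage": 1_000_000,
    },
}

# SIA: weight each hypothesis by how many observers it puts in our situation.
weights = {name: h["prior"] * h["observers_at_our_stage"]
           for name, h in hypotheses.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name}: {w / total:.6f}")
# With these made-up numbers, almost all posterior credence lands on the
# late-filter hypothesis, i.e. away from "we precede a big AI explosion".
```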
The utility of an anthropic approach to this issue seems questionable, though. The great silence does tell us something, and something rather depressing, it is true, but it is far from our only relevant source of information on the topic. We have an impressive mountain of other information to consider and update on.
To give but one example, we don't yet see any trace of independently-evolved micro-organisms on other planets. The less evidence there is for independent origins of life elsewhere, the more that suggests a substantial early filter, and the less need there is for a late one.
This is true, but because it does not suggest THE END OF THE WORLD, it is not so newsworthy. Selective reporting favours apocalyptic elements. Seeing only the evidence that supports one side of such stories seems likely to lead people to adopt a distorted world view, with inaccurate estimates of the risks.
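To put a rough number on the commenter's example about independent origins of life, here is a minimal Bayes-rule sketch. The prior and the likelihoods are invented for illustration and are not drawn from the text; the point is only the direction of the update.

```python
# Minimal Bayes update on "no independently-evolved micro-organisms seen",
# with invented numbers: an early filter makes life rare, so seeing none is
# more expected under it than under a late filter.

prior = {"early filter": 0.5, "late filter": 0.5}               # assumed prior
likelihood_no_life = {"early filter": 0.9, "late filter": 0.4}  # assumed

unnormalised = {h: prior[h] * likelihood_no_life[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: round(v / total, 3) for h, v in unnormalised.items()}

print(posterior)  # {'early filter': 0.692, 'late filter': 0.308}
# The absence of evidence for independent life shifts credence toward an
# early filter, and so away from a filter still ahead of us.
```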