This rests on a major assumption: that the AI will conclude it simply isn’t the first to pass the great filter. I suspect that a strong AI in that sort of context would have good reason to think otherwise.
It’s not a direct assumption: it follows from (a) and (b) that the AI is extremely unlikely to be the first to have passed the great filter. But if the AI believes that no other explanation, including the zoo hypothesis, has a non-trivial probability of being correct, then it would conclude that mankind probably is the first to have passed the great filter.