The Great Filter isn’t magic either
Crossposted at Less Wrong 2.0. A post suggested by James Miller’s presentation at the Existential Risk to Humanity conference in Gothenburg.
Seeing the emptiness of the night sky, we can dwell upon the Fermi paradox: where are all the alien civilizations that simple probability estimates imply we should be seeing?
Especially given the ease of moving within and between galaxies, the cosmic emptiness implies a Great Filter: something that prevents planets from giving birth to star-spanning civilizations. One worrying possibility is the likelihood that advanced civilizations end up destroying themselves before they reach the stars.
The Great Filter as an Outside View
In a sense, the Great Filter can be seen as an ultimate example of the Outside View: we might have all the data and estimates we believe we could ever need from our models, but if those models predict that the galaxy should be teeming with visible life, then it doesn't matter how reliable our models seem: they must be wrong.
In particular, if you fear a late Great Filter (if you fear that civilizations are likely to destroy themselves) then you should increase your fear, even if "objectively" everything seems to be going all right. After all, the other civilizations that destroyed themselves presumably also thought everything seemed to be going all right. Then you can adjust your actions using your knowledge of the Great Filter. But presumably other civilizations also thought of the Great Filter and adjusted their own actions, and that didn't save them, so maybe you need to try something different again, or maybe you can do something that breaks the symmetry from the timeless decision theory perspective, like sending a massive signal to the galaxy...
The Great Filter isn’t magic
It can all get very headache-inducing. But, just as the Outside View isn't magic, the Great Filter isn't magic either. If advanced civilizations destroy themselves before becoming space-faring or leaving an imprint on the galaxy, then there is some phenomenon that is the cause of this. What can we say if we look analytically at the Great Filter argument?
First of all, suppose we had three theories: an early Great Filter (technological civilizations are rare), a late Great Filter (technological civilizations destroy themselves before becoming space-faring), or no Great Filter. Then we look up at the empty skies and notice no aliens. This rules out the third theory, but leaves the relative probabilities of the other two intact.
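As a toy illustration (my own stylized numbers, not anything from the post), here is that update in code: conditioning on the empty sky zeroes out "no Great Filter" while leaving the early-versus-late odds ratio untouched.

```python
# Stylized Bayes update for the three theories above. The "empty sky" observation
# has (roughly) probability 1 under either Great Filter and ~0 under "no Great
# Filter", so it eliminates the third theory but preserves the early/late odds.
priors = {"early": 0.2, "late": 0.4, "none": 0.4}
p_empty_sky = {"early": 1.0, "late": 1.0, "none": 0.0}

unnormalised = {h: priors[h] * p_empty_sky[h] for h in priors}
total = sum(unnormalised.values())
posterior = {h: v / total for h, v in unnormalised.items()}

print(posterior)  # {'early': 0.333..., 'late': 0.666..., 'none': 0.0}
print(priors["early"] / priors["late"],
      posterior["early"] / posterior["late"])  # 0.5 and 0.5: ratio unchanged
```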
Then we can look at objective evidence. Is human technological civilization likely to end in nuclear war? Possibly, but are the odds in the 99.999% range that would be needed to explain the Fermi Paradox? Every year that has gone by has reduced the likelihood that nuclear war is that overwhelmingly probable. So a late Great Filter may have seemed quite probable compared with an early one, but much of the evidence we see is against it (especially if we assume that AI, which is not a Great Filter, might have been developed by now). Million-to-one prior odds can be overcome by merely 20 bits of information.
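That last claim is just arithmetic, and the snippet below is only a sanity check of it: a bit of evidence is a factor-of-two update to the odds, so a million-to-one deficit needs about log2(10^6) bits.

```python
import math

# Million-to-one prior odds against the hypothesis.
prior_odds = 1e-6

# Each bit of evidence doubles the odds, so the bits needed to reach even odds
# is the base-2 log of the odds deficit.
bits_needed = math.log2(1 / prior_odds)
print(bits_needed)              # ~19.93 bits

# Conversely, 20 bits multiply the odds by 2**20 ~= 1.05 million.
print(prior_odds * 2 ** 20)     # ~1.05, i.e. slightly better than even
```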
And what about the argument that we have to assume that prior civilizations would also have known of the Great Filter, and thus that we need to do more than they would have? In your estimation, is the world currently run by people taking the Great Filter argument seriously? What is the probability that the world will be run by people who take the Great Filter argument seriously? If this probability is low, we don't need to worry about the recursive aspect; the ideal situation would be if we could achieve:
1. Powerful people taking the Great Filter argument seriously.
2. Evidence that it was hard to make powerful people take the argument seriously.
Of course, successfully achieving 1 is evidence against 2, but the Great Filter doesn't work by magic. If it looks like we achieved something really hard, then that's some evidence that it is hard. Every time we find something that is unlikely under a late Great Filter, some probability mass shifts away from the late Great Filter and into alternative hypotheses (early Great Filter, zoo hypothesis, ...).
Variance and error of xrisk estimates
But let’s focus narrowly on the probability of the late Great Filter.
Current estimates for the risk of nuclear war are uncertain, but let’s arbitrarily assume that the risk is 10% (overall, not per year). Suppose one of two papers comes out:
- Paper A shows that current estimates of nuclear war risk have not accounted for a lot of key facts; when these facts are added in, the risk of nuclear war drops to 5%.
- Paper B is a massive model of international relations with a ton of data, excellent predictors, and multiple lines of evidence, all pointing towards the real risk being 20%.
What would either paper mean from the Great Filter perspective? Well, counter-intuitively, papers like A typically increase the probability of nuclear war being a Great Filter, while papers like B decrease it. This is because none of 5%, 10%, or 20% is large enough to account for the Great Filter, which requires probabilities in the 99.99% range. And though paper A decreases the probability of nuclear war, it also leaves more room for uncertainty: we've seen that a lot of key facts were missing from previous papers, so it's plausible that key facts are still missing from this one. On the other hand, though paper B increases the probability, it makes it unlikely that the probability will be raised much further.
So if we fear the Great Filter, we should not look at risks whose probabilities are high, but at risks whose uncertainty is high, where the probability of us making an error is high. If we consider our future probability estimates as a random variable, then the one whose variance is higher is the one to fear. So a late Great Filter would make biotech risks even worse (current estimates of the risk are poor) while not really changing asteroid impact risks (current estimates of the risk are good).
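Here is a minimal sketch of that variance point, with Beta distributions standing in for "our future probability estimate"; the parameters and the 99.99% threshold are toy assumptions of mine, not anything from the post.

```python
# Sketch: treat our uncertainty about the "true" risk as a Beta distribution and
# ask how much belief sits above the ~99.99% level a late Great Filter would need.
from scipy.stats import beta

def tail_above(threshold, a, b):
    """P(true risk > threshold) when the risk estimate is Beta(a, b) distributed.
    Uses P(X > t) for Beta(a, b) == P(Y < 1 - t) for Beta(b, a), which stays
    numerically accurate for very small tails."""
    return beta.cdf(1.0 - threshold, b, a)

# Paper-A-like belief: point estimate around 5%, but wide uncertainty.
print(tail_above(0.9999, a=0.5, b=9.5))      # tiny but nonzero (~1e-39)
# Paper-B-like belief: point estimate around 20%, but tightly pinned down.
print(tail_above(0.9999, a=200.0, b=800.0))  # underflows to 0.0: negligible
```

The low-mean but high-variance belief puts far more weight on the extreme probabilities the Great Filter story needs, which is the sense in which papers like A strengthen a late Great Filter while papers like B weaken it.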
I’m tempted to suggest that the field of interstellar futurology has two big questions, each with very wide error bars, and each of which, considered on its own, suggests the need for some other theory (outside the horizon of common reasoning) to produce an answer.
It makes me wonder how plausible it is that these questions are related, and help answer each other:
(1) How many other species are out there for us to meet?
(2) Will we ever go out there or not?
For the first question, Occam suggests that we consider small numbers like "0" or "1", or else that we consider simple evolutionary processes that can occur everywhere and imply numbers like "many".
Observational evidence (as per Fermi) so far rules out “many”.
Our own late-in-the-universe self-observing existence with plausible plans for expansion into space (which makes the answer to the second question seem like it could be yes) suggests that 0 aliens out there is implausible… so what about just going with 1?
This 1 species would not be “AN alien race” but rather “THE alien race”. They would be simply the one minimal other alien race whose existence is very strongly implied by minimal evidence plus logical reasoning.
Looping back to the second question of interstellar futurology (and following Occam and theoretical humility in trying to keep the number of theoretical elements small) perhaps the answer to whether our descendants will be visible in the skies of other species is “no with 99.99% probability” because of THE alien race.
When I hear "the zoo hypothesis", this logically simple version, without lots of details, is what I usually think of: simply that there is "some single thing", and for some reason it makes the sky empty and forecloses our ever doing anything that would make the sky of another species NOT empty.
However, Wikipedia's zoo hypothesis is full of crazy details about politics and culture, and about how moral progress is going to somehow make every species converge on the one clear moral rule of not being visible to any other species at our stage or below. On that account we ourselves (and every single other species among the plausibly "many") are also in some sense "THE (culturally convergently universal) species": the space civilization that sprouts everywhere and inevitably, convergently evolves into maintaining the intergalactic zoo.
Yeah. This is all very nice… but it seems both very detailed and kind of hilariously optimistic… like back when the Soviet Union’s working theory was that of course the aliens would be socialist… and then data came in and they refused to give up on the optimism even though it no longer made sense, so they just added more epicycles and kept chugging away.
I’m reminded of the novels of Alastair Reynolds where he calls THE alien race “The Inhibitors”.
Reynolds gave them all kinds of decorative details that might be excused by the demand that commercial science fiction have dramatically compelling plots… However, one of those details was that they were a galactic rather than intergalactic power. This seems like a really critical strategic fact that can't be written off as a detail added for story drama, and that counts against the science side of his work. Too much detail of the wrong sort!
In the spirit of theoretical completeness, consider pairing the “optimistic zoo theory” with a more “pessimistic zoo theory”.
In the pessimistic version, THE intergalactic alien race is going to come here and kill us. Our chance of preventing this extermination is basically "the number of stars we see that seem to have been the origin of a visible and friendly intergalactic civilization (plus one, as per Laplace's rule of succession) divided by the number of stars where a civilization with this potential could have developed".
By my count our chance of surviving using this formula would be ((0 + 1) / 10 ^ BIG).
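Spelled out, that is Laplace's rule of succession, (s + 1) / (n + 2), applied with zero successes; the star count below is a made-up placeholder for the "10 ^ BIG" in the comment, not an actual estimate.

```python
# Toy illustration of the survival estimate above via Laplace's rule of
# succession: after s successes in n trials, estimate the probability of
# success on the next trial as (s + 1) / (n + 2).

def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# Zero stars observed to have produced a visible, friendly intergalactic
# civilization, out of a huge (here: hypothetical) number of candidate stars.
candidate_stars = 10 ** 11
print(rule_of_succession(0, candidate_stars))   # ~1e-11: effectively hopeless
```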
Like if it was you versus a weed in your garden, the weed’s chances of surviving you are better than humanity’s chances of surviving THE aliens.
Lower than a weed's chances? Yes! At least individual weeds evolved under the selection pressure of animal grazing, and each weed has a plant genome full of survival wisdom to use against a human weeder, who is doing something more or less similar to what grazing animals do.
So the only strong observational argument I can see against the pessimistic zoo theory is that if it were true, then to square with what we see we have to suppose that THE alien weeders would bother with camouflage that SETI can’t penetrate.
Consider all the potentially valuable things they could do with the universe that would tip us off right away, and then consider the cost of being visible. Would it be worth it for THE aliens (the first and only old intergalactic alien race) to hide in this way? I would not naively expect it.
Naively I'd have thought that the shape of the galaxy and its contents would be whatever they wanted it to be, and that attempts to model galactic orbital and/or stellar histories would point the finger at non-obvious causes, with signs of design intent relative to some plausible economic goal. Like this work, but with more attention to engineering intent.
So a good argument against this kind of pessimism seems like it would involve calculation of the costs and benefits of visible projects versus the benefits and costs of widespread consistent use of stealth technology.
If stealth is not worth it, then the Inhibitors (or Weeders or whatever you want to call THE aliens) wouldn’t bother with hiding and the lack of evidence of their works would be genuine evidence that they don’t exist.
The pessimistic zoo theory makes this proposal seem heroic to me :-)
The hard part here seems like it would be to figure out if there is anything humans can possibly build in the next few decades (or centuries?) that might continue to send a signal for the next 10 million years (in a way we could have detected in the 1970s) and that will continue to function despite THE alien race’s later attempts to turn it off after they kill us because it messes up their stealth policy.
My guess is that the probability of an enduring “existence signal” being successfully constructed and then running for long enough to be detected by many other weed species is actually less than the probability that we might survive, because an enduring signal implies a kind of survival.
By contrast, limited “survival” might happen if samples of earth are taken just prior to a basically successful weeding event...
Greg Bear’s “The Forge Of God” and sequel “Anvil of Stars” come to mind here. In those books Bear developed an idea that space warfare might be quite similar to submarine warfare, with silence and passive listening being the fundamental rule, most traps and weapons optimized for anonymous or pseudo-natural deployment, and traceable high energy physical attacks with visibly unnatural sources very much the exception.
As with all commercially viable books, you've got to have hope in there somewhere, so Bear populated the sky with >1 camouflaged god-like space civilizations that arrive here at almost precisely the same time, and one of them saves us in a way that sort of respects our agency but leaves us making less noise than before. This seems optimistic in a way that Occam would justifiably complain about, even as it makes the story more fun for humans to read...
I’ve thought about things like that before, but always dismissed them, not as wrong but as irrelevant—there is nothing that can be done about that, as they would certainly have a fully armed listening post somewhere in the solar system to put us down when the time comes (though the fact they haven’t yet is an argument against their existence).
But since there’s nothing to be done, I ignore the hypothesis in practice.
I see how arguments that "the great filter is extremely strong" generally suggest that any violent resistance against an old race of exterminators is hopeless.
However it seems to me as if the silent sky suggests that everything is roughly equally hopeless. Maybe I’m missing something here, and if so I’d love to be corrected :-)
But starting from this generic evidential base: if everything is hopeless because of the brute fact of the (literally astronomically) large silent sky, with the strength of this evidence blocking nearly every avenue of hope for the future, then I'm reasonably OK with allocating some thought to basically every explanation of the silent sky that has a short description length, which I think includes the pessimistic zoo hypothesis...
Thinking about this hypothesis might suggest methods to timelessly coordinate with other “weed species”? And this or other thoughts might suggest new angles on SETI? What might a signal look like from another timelessly coordinating weed species? This sort of thinking seems potentially productive to me...
HOWEVER, one strong vote against discussing the theory is that the pessimistic zoo hypothesis is an intrinsically “paranoid” hypothesis. The entities postulated include an entity of unknown strength that might be using its strength to hide itself… hence: paranoia.
Like all paranoid theories, it comes with a sort of hope function: each non-discovery of easy/simple evidence for the existence of a hostile entity marginally increases both (1) the probability that the entity does not exist, and (2) the probability that, if the entity exists, it is even better at hiding from you than you had hypothesized when you searched in a simple place with the mild anticipation of seeing it.
At the end of a fruitless but totally comprehensive search of this sort you either believe that the entity does not physically exist, or else you think that it is sort of “metaphysically strong”.
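A minimal numerical sketch of that hope function, with made-up priors and detection probabilities chosen only to show the direction of the update:

```python
# Three hypotheses: no hider exists, a weak hider (easy to spot), a strong hider
# (very good at hiding). One fruitless simple search updates all three.
prior = {"none": 0.50, "weak": 0.30, "strong": 0.20}
p_no_detection = {"none": 1.00, "weak": 0.20, "strong": 0.95}

unnorm = {h: prior[h] * p_no_detection[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: v / total for h, v in unnorm.items()}

print(posterior)   # P(none) rises from 0.50 to ~0.67

# Conditional on a hider existing, the share of "strong" also rises:
exists_mass = posterior["weak"] + posterior["strong"]
print(posterior["strong"] / exists_mass)   # ~0.76, up from a prior share of 0.40
```

So an empty-handed search simultaneously pushes you towards "it does not exist" and, conditional on existence, towards "it is very good at hiding".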
The recently popular "Three Body Problem" explores such paranoia a bit with regard to particle physics. The powers behind the monolith in Clarke's "2001" also come to mind (although that entity seemed essentially benevolent, and weak compared to what might be seen in a fully bleak situation), and Clarke himself coined the phrase "sufficiently advanced technology is indistinguishable from magic" partly, I think, to justify some of what he wrote as being respectable enough for science fiction.
This brings up a sort of elephant in the room: paranoid hypotheses are often a cognitive tarpit that captures the fancy of the mentally ill and/or theologically inclined people.
The hallmarks of bad thinking here tend to be (1) updating too swiftly in the direction of extreme power on the part of the hidden entity, (2) getting what seem like a lot of false positives when analyzing situations where the entity might have intervened, and (3) using the presumed interventions to confabulate motives.
To discuss a paranoid hypothesis in public risks the speaker becoming confused in the mind of the audience with other people who entertain paranoid hypotheses with less care.
It would make a lot of sense to me if respectable thinkers avoided discussing the subject for this reason.
If I were going to work on this in public, I think it would be useful to state up front that I'd refrain from speculating about precise motives for silencing weed species like we might be. Also, if I infer extremely strong aliens, I'm going to hold off on using their inferred strength to explain anything other than astronomy data, and even that only reluctantly.
Also, I’d start by hypothesizing aliens that are extremely weak and similar to conventionally imaginable human technology that might barely be up to the task of suppression, and thoroughly rule that level of power out before incrementing the hypothesized power by a small amount.
Unless we assume the filter is behind us.
Just the fact that they can cross between the stars implies that they can divert an asteroid to slam into the Earth. This gives an idea of what we'd need to do to defend against them, in theory.
“If advanced civilizations destroy themselves before becoming space-faring or leaving an imprint on the galaxy, then there is some phenomenon that is the cause of this.”
Not necessarily something specific. It could be caused by general phenomena.