Quickly passing through the great filter
To quickly escape the great filter, should we flood our galaxy with radio signals? In communicating with fellow humans we already send out massive amounts of information that an alien civilization could eventually pick up, but should we engage in positive SETI? Or, if you fear the attention of dangerous aliens, should we set up powerful, long-lived, solar- or nuclear-powered automated radio transmitters in the desert and in space that stay silent so long as they receive a yearly no-go signal from us, but that, if they ever fail to get that signal because our civilization has fallen, continuously transmit our dead voice to the stars? If we do destroy ourselves, it would be an act of astronomical altruism to warn other civilizations of our fate, especially if we broadcast news stories from just before our demise, e.g. physicists excited about a new high-energy experiment.
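For concreteness, here is a minimal sketch of the dead-man's-switch logic such a transmitter might run. The daily check, the one-year timeout, and the stand_down_signal_received / broadcast_archive functions are all illustrative placeholders, not a real transmitter design:

```python
# Minimal sketch of a "dead man's switch" transmitter.
# The check interval, timeout, and both placeholder functions are assumptions
# made for illustration; they are not part of any actual broadcast protocol.

import time

YEAR_SECONDS = 365 * 24 * 60 * 60   # timeout: one missed yearly signal
CHECK_INTERVAL = 24 * 60 * 60       # poll for the no-go signal once a day


def stand_down_signal_received() -> bool:
    """Placeholder: True if Earth's yearly 'stay silent' signal has arrived."""
    raise NotImplementedError


def broadcast_archive() -> None:
    """Placeholder: transmit the stored warning and news archive to the stars."""
    raise NotImplementedError


def run_transmitter() -> None:
    last_contact = time.monotonic()
    while True:
        if stand_down_signal_received():
            # Civilization still alive: reset the clock and stay silent.
            last_contact = time.monotonic()
        elif time.monotonic() - last_contact > YEAR_SECONDS:
            # No signal for over a year: assume collapse and start broadcasting.
            broadcast_archive()
        time.sleep(CHECK_INTERVAL)
```

The design choice is simply a dead man's switch: more than a year of silence from Earth is treated as evidence of collapse and flips the transmitter on.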
Something prevents solar systems from giving birth to spacefaring civilizations; Robin Hanson has called this the great filter. Stuart Armstrong and Anders Sandberg show that it would take an advanced civilization a trivial amount of effort to seed nearby galaxies with self-replicating intelligences. Since we seem pretty close to being able to expand throughout the stars ourselves, especially if the singularity is near, then if much of the great filter lies in front of us, we are probably doomed. For reasons I won't go into here (but see this), there is good reason to believe that much of the great filter does lie before us (although Scott Alexander has a different view). Since I don't want this post to be about the causes of the Fermi paradox, let's make the following doomed assumption:
With high probability, a large number of civilizations have existed in our galaxy that equaled or exceeded our current level of technological development and would have gone on to make their presence felt throughout the galaxy, but they all suffered some disaster that prevented this expansion. And assume that, with high probability, the number of civilizations that reached our level of development but soon collapsed greatly exceeds the number that reached our level of development and survived at least another thousand years, because if this were not true we would almost certainly have seen evidence of extraterrestrial intelligences by now.
Accepting the doomed assumption gives us an outside view of the probability of our species' survival. An inside view would basically sum up all of the possible causes of our civilization's collapse, whereas the outside view says that since, with high probability, many civilizations reached our level of development and then fell, we too will probably fail, even if we can't think of enough inside-view reasons why we are likely doomed.
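One rough way to make the outside view concrete is Laplace's rule of succession; treating past civilizations as exchangeable trials with a uniform prior on the survival rate is a simplifying assumption on my part, not something the doomed assumption itself specifies:

$$
P(\text{we survive and expand}) \approx \frac{s + 1}{N + 2}
$$

where N is the number of civilizations that reached our level of development and s is the number that went on to expand. Under the doomed assumption, s = 0 and N is large, so the outside-view estimate sits near zero no matter how reassuring the inside view looks.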
To help think through what we should do if we believe the doomed assumption, consider the following analogy. Imagine you're a gladiator who must defeat one more opponent to achieve your seventh victory. If a gladiator in your city beats seven opponents, he gets his name forever engraved on the coliseum walls and is granted freedom. All matches in your coliseum are to the death. Sizing up your next opponent, you at first give yourself an excellent chance of victory: several of your past opponents seem to have been stronger than this next guy, and you are in top condition. But then you realize that no gladiator has ever had his name on the walls; all died before winning their seventh victory, and you take this as a horrible sign. The coliseum has been around for a long, long time, and since the beginning the rule has been that if you win seven victories you get your name immortalized. You become convinced that you will die in the next match and decide that you might as well have fun while you can, so you abandon your diet and training for wine and prostitutes.
Your master becomes concerned at your behavior, and when you explain to him that you think it nearly impossible that you, alone among all the gladiators who have ever fought in your coliseum, will survive long enough to get your name inscribed on the wall, he offers you a deal. If you give him a few gold pieces, he will bribe the stadium owner to permanently write your name on the coliseum wall before the next fight, and he credibly promises that even if you lose, your name will remain. Should you pay? Inscribing your name would do nothing to make your next opponent weaker, but once your name is engraved you no longer need fear the outside-view assessment that you cannot win because you are not special enough to be the lone gladiator with his name inscribed. If you are extremely perplexed that in the history of the coliseum no other gladiator managed to win enough fights to get his name listed, you might decide that there is some unknown X factor working against any gladiator in your position. Even if you can't identify X, and so can't imagine from an inside view how merely getting your name inscribed will help you overcome X, you might decide that this paradox just means you can't trust your inside view, and so you should do whatever you can, no matter how seemingly silly, to make the outside view apply with less force to your predicament.
I wonder if we are in a similar situation with regard to positive SETI. For me at least, the Fermi paradox and the great filter create a conflict between my inside and outside assessments of the chances that our high-technology civilization survives long enough to make us known to species at our level of development. Flooding the galaxy with signals, even if only conditional on our civilization's collapse, would significantly reduce the odds that we fail to reveal our existence to other civilizations at our level of development, if such civilizations are commonplace. Flooding would consequently, from an outside view, make me more optimistic about our chances of survival. Of course, if we model other civilizations as somewhat rational actors, then the fact that they have seemingly chosen not to flood the galaxy with radio signals should make us more reluctant to do so.
You might argue that I’m confusing map and territory, but consider an extreme example. First, pretend scientists make two horrifying discoveries:
1) They find multicellular life on one of Saturn's moons that genetic analysis proves arose independently of life on Earth.
2) They uncover the ruins of an ancient dinosaur civilization, proving that some species of dinosaur achieved roughly human-level intelligence, even though the common ancestor of this species and mankind was unintelligent.
These two findings would provide massive Bayesian evidence that intelligent life in our galaxy is almost certainly commonplace, and would leave the zoo hypothesis (which I think is unlikely) or a late great filter as the only real candidate explanations for the Fermi paradox. But now imagine that Elon Musk's fear of the great filter motivates him to develop a SpaceX-hyperloop transmitter that simultaneously sends a powerful signal to every single star system in the galaxy, a signal that any civilization at our level of development would find and recognize as being sent by an extraterrestrial intelligence. Plus, once activated, the transmitter must, by the laws of physics, keep operating for the next billion years. After the transmitter had been turned on, wouldn't you become more optimistic about mankind's survival, even if the transmitter had no other practical purpose? And if Musk were going to turn on his transmitter tomorrow, would you fear that the great filter is on the verge of annihilating us?
[This post greatly benefited from a discussion I had with Stuart Armstrong, although he doesn’t necessarily agree with the post’s contents.]