Resolving the Fermi Paradox: New Directions

Our sun appears to be a typical star: unremarkable in age, composition, galactic orbit, or even in its possession of many planets. Billions of other stars in the Milky Way have similar general parameters and orbits that place them in the galactic habitable zone. Extrapolations from recent exoplanet surveys indicate that most stars have planets—with perhaps 20 billion Earth-like planets in the galaxy—removing yet another potential unique dimension for a great filter in the past.
A paradox indicates a flaw in our reasoning or our knowledge which, upon resolution, may cause a large update in our beliefs.
Ideally we could resolve this through massive multiscale Monte Carlo computer simulations approximating Solomonoff induction on our current observational data. If we survive and create superintelligence, we will probably do just that.
In the meantime, we are limited to constrained simulations, Fermi estimates, and other shortcuts that approximate the ideal Bayesian inference.
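As a toy illustration of the Fermi-estimate style of shortcut, here is a minimal Monte Carlo sketch of a Drake-style calculation. All parameter ranges below are illustrative assumptions, not estimates defended in this post:

```python
# Minimal sketch: Monte Carlo over Drake-style parameters.
# All ranges below are illustrative assumptions, not estimates from the text.
import random

def sample_num_civs(rng: random.Random) -> float:
    stars = 2e11                               # stars in the galaxy (rough)
    f_planets = rng.uniform(0.5, 1.0)          # fraction of stars with planets
    f_habitable = 10 ** rng.uniform(-2, -0.5)  # habitable planets per star
    f_life = 10 ** rng.uniform(-3, 0)          # life arises
    f_tech = 10 ** rng.uniform(-4, 0)          # life -> tech civilization
    return stars * f_planets * f_habitable * f_life * f_tech

rng = random.Random(0)
samples = sorted(sample_num_civs(rng) for _ in range(100_000))
print("median:", samples[len(samples) // 2])
print("90% interval:", samples[5_000], samples[95_000])
```

The point of such sketches is not the output numbers (which inherit the arbitrariness of the input ranges) but the shape of the uncertainty they expose.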
The Past
While there is still obvious uncertainty concerning the likelihood of each transition along the path from the formation of an earth-like planet around a sol-like star to an early technological civilization, the general flow of recent evidence favours a strong Mediocrity Principle.
Here are a few highlights from the last few decades bearing on an early filter:
The time window between the formation of the Earth and the earliest life has been narrowed to a brief interval. Panspermia has also gained ground, with some recent complexity arguments favoring a common origin of life around 9 billion years ago.[1]
The discovery of various extremophiles indicates that life is robust to a far wider range of environments than the current norm on Earth.
Advances in neuroscience and studies of animal intelligence lead to the conclusion that the human brain is not nearly as unique as once thought. It is an ordinary scaled-up primate brain, with a cortex enlarged to about 4x the size of a chimpanzee's. Elephants and some cetaceans have cortical neuron counts similar to the chimpanzee's, and demonstrate similar or greater levels of intelligence in terms of rituals, problem solving, tool use, communication, and even understanding rudimentary human language. Elephants, cetaceans, and primates are widely separated lineages, suggesting that the evolution of intelligence is robust and perhaps inevitable.
So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction—but see this reply for an argument for an early filter).
The Future(s)
When modelling the future development of civilization, we must recognize that the future is a vast cloud of uncertainty compared to the past. The best approach is to focus on the key general features of future postbiological civilizations, categorize the full space of models, and then update on our observations to determine which ranges of the parameter space are excluded and which regions remain open.
An abridged taxonomy of future civilization trajectories:
Collapse/Extinction:
Civilization is wiped out by an existential catastrophe that sterilizes the planet sufficiently to kill most large multicellular organisms, essentially resetting the evolutionary clock by a billion years. Given the potential dangers of nanotech, AI, and nuclear weapons—and then aliens—I believe this possibility is significant: i.e., in the 1% to 50% range.
Biological/Mixed Civilization:
This is the old-school sci-fi scenario. Humans or our biological descendants expand into space. AI is developed but remains limited to human-level intelligence, like C-3PO. No or limited uploading.
This leads eventually to slow colonization, terraforming, and perhaps eventually Dyson spheres.
This scenario is almost not worth mentioning: prior < 1%. Unfortunately, SETI in its current form is still predicated on a world model that assigns a high prior to these futures.
PostBiological Warm-tech AI Civilization:
This is Kurzweil/Moravec's sci-fi scenario. Humans become postbiological, merging with AI through uploading. We become a computational civilization that then spreads outward at some fraction of the speed of light to turn the galaxy into computronium. This scenario rests on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.
One of the very few reasonable assumptions we can make about any superintelligent postbiological civilization is that higher intelligence involves increased computational efficiency. Advanced civs will upgrade into physical configurations that maximize computational capability given the local resources.
Thus to understand the physical form of future civs, we need to understand the physical limits of computation.
One key constraint is the Landauer limit, which states that the erasure (or cloning) of one bit of information requires a minimum of kT·ln(2) joules. At room temperature (293 K), this corresponds to a minimum of about 0.017 eV to erase one bit. Minimum is, however, the key word here: at exactly the limit, the probability of the erasure succeeding is only 50%. Reliable erasure requires some multiple of the minimal expenditure—a reasonable estimate being about 100 kT, or roughly 1 eV, per bit erasure at today's levels of reliability.
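These numbers are easy to check. A minimal sketch, using the temperatures discussed in the rest of this section (0.01 K being the assumed practical limit for large objects):

```python
# Sketch: Landauer limit k*T*ln(2) per bit erasure, in eV, at several temperatures.
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
EV = 1.602176634e-19  # joules per eV

def landauer_ev(temp_k: float) -> float:
    """Minimum (50%-reliable) erasure energy for one bit at temperature temp_k."""
    return K_B * temp_k * math.log(2) / EV

for label, t in [("room temperature", 293.0),
                 ("cosmic background", 2.7),
                 ("large-object limit", 0.01)]:
    print(f"{label:20s} {t:7.2f} K  {landauer_ev(t):.2e} eV/bit")
# room temperature      293.00 K  1.75e-02 eV/bit
# cosmic background       2.70 K  1.61e-04 eV/bit
# large-object limit      0.01 K  5.98e-07 eV/bit
```

Since the limit is linear in T, the cost ratios below follow directly from the temperature ratios.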
The second key consideration is that the Landauer limit does not include the cost of interconnect, which already dominates the energy cost of modern computing. Just moving bits around dissipates energy.
Moore’s Law is approaching its asymptotic end in a decade or so due to these hard physical energy constraints and the related miniaturization limits.
I assign the warm-tech scenario a prior roughly equal to my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.
From Warm-tech to Cold-tech
There is a way forward to vastly increased energy efficiency, but it requires reversible computing (to increase the ratio of computations per bit erasure) and fully superconducting interconnect to reduce interconnect losses to near zero.
The path to enormously more powerful computational systems necessarily involves transitioning to very low temperatures, and the lower the better, for several key reasons:
There is the obvious immediate gain from lowering the cost of bit erasures: a bit erasure at room temperature costs roughly 100 times more than one at the cosmic background temperature (2.7 K), and roughly 30,000 times more than one at 0.01 K (the current achievable limit for large objects).
Low temperatures are required for most superconducting materials regardless.
The delicate coherence required for practical quantum computation requires, or at least works best at, ultra-low temperatures.
At a more abstract level, the essence of computation is precise control over the physical configurations of a device as it undergoes complex state transitions. Noise/entropy is the enemy of control, and temperature is a form of noise.
Assuming large-scale quantum computing is possible, the ultimate computer is thus a reversible, massively entangled quantum device operating near absolute zero. Unfortunately, such a device would be delicate to a degree that is hard to imagine—even a single misplaced high-energy particle could cause enormous damage.
In this model, an advanced computational civilization would take the form of a compact body (anywhere from asteroid to planet size) that employs layers of sophisticated shielding to deflect as much of the incoming particle flux as possible. The ideal environment for such a device is as far from hot stars as one can possibly get. The extreme energy efficiency of advanced low-temperature reversible/quantum computing implies that energy is not a constraint: such civilizations could probably power themselves with fusion reactors for millions, if not billions, of years.
Stellar Escape Trajectories
For a cold-tech civilization, one interesting long-term strategy involves escaping the local star's orbit to reach the colder interstellar medium, and eventually the intergalactic medium.
If we assume that these future civs have long planning horizons (reasonable), we can treat this as an investment with an initial cost—the energy required to reach escape velocity—and a return measured as the future integral of computation gained along the trajectory due to increased energy efficiency. Expendable boost mass in the system can be used, and domino chains of complex chaotic gravitational-assist maneuvers computed by deep simulations may offer a route to expelling large objects using reasonable amounts of energy.[3]
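For a feel of the investment side, here is a rough sketch of the minimum kinetic energy needed to eject a compact brain from a 1 AU circular orbit with a single prograde burn. The 10^18 kg mass is an illustrative assumption, and the gravitational-assist chains mentioned above exist precisely to avoid paying this cost directly:

```python
# Rough sketch: minimum energy to eject a compact object from a 1 AU circular orbit.
# The 1e18 kg "asteroid-scale brain" mass is an illustrative assumption.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
AU = 1.496e11        # m
m_brain = 1e18       # kg, roughly a 100 km asteroid (assumption)

v_orbit = math.sqrt(G * M_SUN / AU)   # circular orbital speed at 1 AU, ~29.8 km/s
v_escape = math.sqrt(2) * v_orbit     # local solar escape speed, ~42.1 km/s
dv = v_escape - v_orbit               # single prograde burn from circular orbit
energy = 0.5 * m_brain * dv**2        # kinetic energy of the burn

print(f"delta-v: {dv/1e3:.1f} km/s")  # ~12.3 km/s
print(f"energy:  {energy:.2e} J")     # ~7.6e25 J
```

That is on the order of 10^26 J—about a fifth of a second of the Sun's total output—large, but not obviously prohibitive over long planning horizons, even before assists.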
The Great Game
Given the constraints of known physics (i.e., no FTL), it appears that the computational brains housing more advanced cold-tech civs will be incredibly vulnerable to hostile aliens. A relativistic kill vehicle is a simple technology that permits little avenue for direct defense. The only strong defense is stealth.
Although the utility functions and ethics of future civs are highly speculative, we can observe that a very large space of utility functions leads to similar convergent instrumental goals involving control over one's immediate future light cone. If we assume that some civs are essentially selfish, then the dynamics suggest that successful strategies will combine stealth and deception to avoid detection with deep simulation sleuthing to discover potential alien civs and their locations.
If two civs discover each other's locations at around the same time, then MAD (mutually assured destruction) dynamics take over and cooperation has stronger benefits. The vast distances involved suggest that one-sided discoveries are more likely.
Spheres of Influence
A new civ, upon achieving the early postbiological stage of development (Earth in, say, 2050?), should be able to resolve the general answer to the Fermi paradox using advanced deep simulation alone—long before any probes could reach distant stars. Assuming the answer is "lots of aliens", further simulations could then be used to estimate the relative likelihood of elder civs interacting with the past light cone.
The first few civilizations would presumably realize that the galaxy is more likely to become mostly colonized, in which case the ideal strategy probably involves expanding actuator-type devices (probes, construction machines) into nearby systems, combined with constructing advanced stealthed cold-tech brains and expelling them out into the void. On the other hand, the very nature of the stealth strategy suggests that it may be hard to confidently determine how colonized the galaxy already is.
For civilizations appearing later, the situation is more complex. The younger a civ estimates itself to be in the cosmic order, the more likely it becomes that its local system has already come under alien influence.
From the perspective of an elder civ, an alien planet at a pre-singularity level of development has no immediate value. Raw materials are plentiful—and most of the baryonic mass appears to be interstellar and free-floating. The tiny relative value of any raw materials on a biological world is probably outweighed—in the long run—by the potential future value of information trade with the resulting mature civ.
Each biological world—each seed of a future elder civ—although perhaps similar in the abstract, is unique in its details. Each such world is valuable for the potentially unique knowledge/insights it may eventually generate, directly or indirectly. From a purely instrumental standpoint, there is thus some value in preserving biological worlds to increase general knowledge of civ developmental trajectories.
However, there could exist cases where the elder civ may wish to intervene. For example, if deep simulations predict that the younger world will probably develop into something unfriendly—such as an aggressive selfish/unfriendly replicator—then small perturbations of the natural trajectory could be called for. In short, the elder civ may have reasons to occasionally 'play god'.
On the other hand, any intervention would itself leave a detectable signature or trace in the historical trajectory, which in turn could be detected by a rival or enemy civ! In the best case these clues would only reveal the presence of an alien influence. In the worst case they could reveal information about the intervening elder civ's home system and the likely locations of its key assets.
Around 70,000 years ago, we had a close encounter with Scholz's star, which passed within 0.8 light-years of the sun (inside the Oort cloud). If the galaxy is well colonized, flybys such as this have potentially interesting implications (that particular flyby roughly coincides with the estimated time of the Toba super-eruption, for example).
Conditioning on our Observational Data
Over the last few decades SETI has searched a small portion of the parameter space covering potential alien civs.
SETI's original main focus was the detection of large permanent alien radio beacons. We can now reasonably rule out models in which advanced civs construct high-energy omnidirectional radio beacons.
At this point we can also mostly rule out large warm-tech civilizations (energy-constrained civilizations) that harvest most of the energy of stars.
Obviously detecting cold-tech civilizations is considerably more difficult, and perhaps close to impossible if advanced stealth is a convergent strategy.
However, determining whether the galaxy as a whole is colonized by advanced stealth civs is a much easier problem. In fact, one way or another, the evidence is already right in front of us. We now know that most of the mass in the galaxy is dark rather than light. I have assumed that cold-tech still involves baryonic matter and normal physics, but there is of course also the possibility that non-baryonic matter could be used for computation. Either way, the dark matter situation is favorable. Even focusing only on normal baryonic matter, the ratio of dark/cold to light/hot mass is still large—very favorable for colonization.
Observational Selection Effects
All advanced civs will have strong instrumental reasons to employ deep simulations to understand and model developmental trajectories for the galaxy as a whole and for civilizations in particular. A very likely consequence is the production of large numbers of simulated conscious observers, à la the Simulation Argument. Universes containing the more advanced low-temperature reversible/quantum computing civilizations will tend to produce many more simulated observer moments, and are thus intrinsically more likely than one would otherwise expect—perhaps massively so.
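A toy version of this anthropic weighting, with purely illustrative priors and observer-moment counts, shows how the efficient-computation models come to dominate the posterior:

```python
# Toy sketch of observer-moment weighting across universe models.
# Priors and observer-moment counts are purely illustrative assumptions.
priors = {"no simulations": 0.50, "warm-tech sims": 0.25, "cold-tech sims": 0.25}
observer_moments = {"no simulations": 1e0, "warm-tech sims": 1e6, "cold-tech sims": 1e12}

# Weight each model by prior times the observer moments it produces
# (a self-indication-style assumption), then normalize.
weights = {m: priors[m] * observer_moments[m] for m in priors}
total = sum(weights.values())
for model, w in weights.items():
    print(f"{model:16s} posterior ~ {w / total:.6f}")
# cold-tech sims dominates despite a modest prior.
```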
Rogue Planets
If the galaxy is already colonized by stealthed cold-tech civs, then one prediction is that some fraction of the stellar mass has been artificially ejected. Some recent observations actually point—at least weakly—in this direction:
"We estimate that there may be up to ∼10^5 compact objects in the mass range 10^−8 to 10^−2 M⊙ per main sequence star that are unbound to a host star in the Galaxy. We refer to these objects as nomads; in the literature a subset of these are sometimes called free-floating or rogue planets."
Although the error range is still large, it appears that free-floating planets outnumber planets bound to stars, perhaps by a rather large margin.
Assuming the galaxy is colonized: it could be that rogue planets form naturally outside of star systems and are then colonized. It could be that they form around stars and are then ejected naturally (and colonized). Artificial ejection—even if it occurs—may be a rare event. Or not. At least a few of these options could potentially be differentiated by future observations—for example, if we find an interesting discrepancy between the rogue planet distribution predicted by simulations (which obviously do not yet include aliens!) and actual observations.
Also: if rogue planets outnumber stars by a large margin, then rogue planet flybys are proportionally more common than stellar flybys.
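This is just the standard n·σ·v encounter-rate estimate. A rough sketch, with illustrative values for the local number density, relative speed, and rogue-planet-to-star ratio:

```python
# Sketch: n * sigma * v encounter-rate estimate for close flybys.
# Density, speed, and the rogue-planet multipliers are illustrative assumptions.
import math

PC_PER_LY = 1.0 / 3.2616
n_stars = 0.1             # stars per cubic parsec (local neighborhood, rough)
v_rel = 3.07e-5           # ~30 km/s relative speed, converted to parsecs per year
b = 1.0 * PC_PER_LY       # count approaches within 1 light-year

rate_stars = n_stars * math.pi * b**2 * v_rel  # flybys per year, no grav. focusing
print(f"stellar flybys within 1 ly: one per ~{1/rate_stars:,.0f} years")

for f in (1, 10, 100):    # assumed rogue-planet-to-star ratio
    print(f"rogue/star ratio {f:3d}: one per ~{1/(rate_stars*f):,.0f} years")
```

With these inputs a star passes within a light-year roughly once per million years; a 100:1 rogue-planet ratio would make such flybys a once-per-ten-millennia event.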
Conclusion
SETI to date allows us to exclude some regions of the parameter space for alien civs, but the excluded regions correspond to models with low prior probability anyway, given the postbiological perspective on the future of life. The most interesting regions of the parameter space probably involve advanced stealthy aliens in the form of small compact cold objects floating in the interstellar medium.
The upcoming WFIRST telescope should shed more light on dark matter and significantly enhance our microlensing detection capabilities. Sadly, its planned launch date isn't until 2024. Space development is slow.