My summary:
Possible solution to the Fermi paradox: there is no paradox. The normal approaches find that there should be a very large number of civilizations by plugging point estimates into the Drake Equation, but multiplying point estimates (as opposed to probability distributions) with each other gives you misleading results.
As a toy example, suppose you multiply nine factors together to get a probability of life per star, with each factor drawn uniformly at random from [0, 0.2] and a point estimate of 0.1 for each. The product of the point estimates is then 1 in a billion, which would translate to an expected 100 life-bearing stars, given 100 billion stars. But if you instead combine the probability distributions, you get a median of 8.7 life-bearing stars (the mean is still 100).
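Here is a minimal Monte Carlo sketch of that toy example (not from the slides; the star count and factor range just follow the paragraph above, and the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stars = 1e11            # 100 billion stars
n_samples = 1_000_000     # Monte Carlo draws

# Nine independent factors, each uniform on [0, 0.2], point estimate 0.1 each.
factors = rng.uniform(0.0, 0.2, size=(n_samples, 9))
p_life = factors.prod(axis=1)      # probability of life per star, one value per draw

point_estimate = 0.1 ** 9          # product of the point estimates: 1e-9
print("from point estimates: ", point_estimate * n_stars)     # 100
print("mean of the product:  ", p_life.mean() * n_stars)      # ~100
print("median of the product:", np.median(p_life) * n_stars)  # ~8.7
```

The mean matches the point-estimate answer, but the product distribution is so skewed that the median (and most of the probability mass) sits far below it.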
Going through the literature to estimate reasonable prior distributions for different values in the Drake Equation, you get much more pessimistic estimates for the probability of life in the universe; the priors chosen by the authors suggest a 40% a priori chance for life only emerging once. We really might just be alone.
Could we generalise this approach?
EY wrote that multiplying point estimates is not a correct way to estimate the probability of cryonics succeeding. https://www.jefftk.com/p/multiple-stage-fallacy However, it looks like his conclusion is that the total probability of success should be higher than implied by the multiplication, not lower as in the case of Sandberg's presentation. This may be because in his case most probabilities are above 0.5, so multiplying the failure probabilities would give the lower estimate; that is, the probability of cryonics failing is smaller than predicted by multiplying the probabilities of failure at each step.
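To make the two multiplications in that comparison concrete, here is a tiny arithmetic sketch (the per-stage numbers are hypothetical, chosen only so that each is above 0.5):

```python
# Hypothetical per-stage success probabilities, all above 0.5 (illustrative only).
stage_success = [0.8, 0.9, 0.7, 0.85, 0.9]

# Multiplying the success probabilities: P(every stage succeeds), assuming independence.
p_all_succeed = 1.0
for p in stage_success:
    p_all_succeed *= p

# Multiplying the failure probabilities: P(every stage fails), assuming independence.
p_all_fail = 1.0
for p in stage_success:
    p_all_fail *= (1.0 - p)

print(p_all_succeed)  # ~0.39: the low number you get by multiplying stage successes
print(p_all_fail)     # ~0.00009: multiplying stage failures gives a far smaller number
```

The two products answer different questions (all stages succeeding vs. all stages failing), which is why switching between the success and failure framings changes the answer so much when the per-stage numbers are above 0.5.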
Nice idea, but I don't think the cases are mathematically analogous. Eliezer is just talking about multiplying probabilities, not estimates of anything, and he's saying that this won't produce the right answer because of human biases, not because it's mathematically invalid. Whereas in the Drake equation we are multiplying probability distributions for certain parameters (the frequencies at which the various conditions for life occur), and it's a mathematical fact that the median of the product isn't the product of the medians.
I would also be interested in anthropic updates and utility updates from the Fermi paradox.
Anthropic updates: 1) According to Katja Grace, SIA makes it more probable that we live in a universe with a late filter. https://www.academia.edu/475444/Anthropic_Reasoning_in_the_Great_Filter
2) If the early filter is real, part of it may still be active, such as a higher intensity of gamma-ray bursts, asteroid impacts, or temperature instability of the atmosphere. If this is true, we live in a more fragile world, and global warming is a higher risk.
3) If any new civilisation expands at nearly the speed of light and destroys everything in its path, which is the most expected outcome of an alien paperclip maximiser, we could exist only in regions of the universe where such an event has not yet happened. However, if this were very common, we should find ourselves surprisingly early. But we are not: the Sun sits somewhere around the median of all stars that will ever form. Yet if we look at the timescale instead, and expect that the universe will exist for trillions of years (no Big Rip), then we are surprisingly early.
Utility considerations:
If Rare Earth is true, METI and SETI are useless and thus safe activities. For a long time I have written that SETI is a much more dangerous activity than METI, as we could find an alien AI. https://www.academia.edu/30029491/The_Risks_Connected_with_Possibility_of_Finding_Alien_AI_Code_During_SETI
But even the smaller probability that Rare Earth is not true has higher consequences, because if visible aliens exist it could have enormous consequences for us: it would mean either a later filter or the possibility of contact.
I agree with the conclusion that the Great Filter is more likely behind us than ahead of us. Some proposed explanations of the Fermi Paradox, such as AI disasters or advanced civilizations retreating into virtual worlds, do not seem to fully resolve it. For AI disasters, for instance, even if an artificial superintelligence destroyed the species that created it, the artificial superintelligence would likely colonize the universe itself. And if some civilizations become sufficiently advanced but choose not to colonize for whatever reason, there would likely be at least some civilizations that would.
This analysis remains predicated on the assumption that a long-lasting intelligent system is easily visible over cosmological or galactic distances with the sorts of investigations that have already been performed by us.
EDIT: BTW, there's a lot of interesting evidence coming out for the ease of abiogenesis, and for the idea that thinking of Earth's biosphere evolution in terms of 'it took 4 gigayears to get X, what if that's just rare' is the wrong way of thinking about things; you need to talk about geochemical phase transitions rather than an accretion of innovations, after which you get explosive changes.
That’s what they talk about on the abiogenesis slides, right?
I was talking more about things like the Great Oxidation (from a reduced atmosphere and iron in the water to a very small amount of oxygen in the air and hydrogen sulfide in the water) and the Proterozoic/Phanerozoic transition (from low-phosphate oceans with some hydrogen sulfide and low oxygen levels to oxic, high-phosphate, very productive oceans and enough atmospheric oxygen to support an ozone layer).
The Great Oxidation is looking like it almost certainly was NOT due to a recent invention of oxygenic photosynthesis. It was instead a geochemical tipping point: once the slowing geology of the Earth and the steady oxidation of crustal sinks could no longer absorb all the biogenic oxygen, the atmosphere (very small compared to the crust) whipped into a new state, long after the oxygenic photosynthesizers that ultimately caused it were in place, triggering massive biochemical shifts across the biosphere in a short time.
The Proterozoic/Phanerozoic transition is looking more and more like it could have been an interesting Earth-system-scale flip. It seems to have involved both a major increase in exposed above-water landmass (growing continents and steadily thinning oceanic crust causing a sudden shift once the ocean level fell far enough to expose large plains rather than just mountains) and an intrinsic bistability of ocean chemistry, with two stable states, one with low primary productivity and oxygen and one with high, that you can only flip between via some kind of shock. Multicellular animals as we know them may simply not be a viable strategy in the low-productivity, low-oxygen state, and predators that can drive evolutionary arms races of the sort that probably drove the Cambrian explosion certainly are not. As such, the late emergence of multicellular heterotrophs on Earth (there is evidence for multicellular photosynthesizers going back over a billion years, last I saw) is not necessarily due to them being HARD, but due to the need for the geosphere and the chemical environment to go through some phase transitions first, some driven by slow buildups of material over time and some possibly more stochastic. They show up remarkably fast after those phase transitions are complete.
EDIT: I don't understand the assertion in the linked slides, in the abiogenesis section, that genetic systems that were precursors to ours could've been more stable than ours. LUCA had our genetic system, full stop, and is certainly older than 3.7 gigayears at the VERY least; for all we know it could go back to 4.4 gigayears. Our genetic code also bears the imprint of an explosive period of waaaaay pre-LUCA evolution in which it was optimized to be literally one in a million in terms of resistance to mutational damage. What came before LUCA was unstable and fell into a stable state, not the other way around. Furthermore, there could be other stable biochemistries, without the need to posit that chemistry had to go directly to ours (though I will go out on a limb and say I suspect protein will be everywhere there is water as a solvent, and that genetic polymers are likely to have phosphates, hah).
EDIT 2: Okay, now I see what you are referring to about transitions in abiogenesis: treating it as a chemical event with some odds per unit volume per unit time. A reasonable analysis, better than most, but it neglects abiogenesis as a self-reinforcing PROCESS rather than a singular event. There are other schools of thought: some people, treating living things as dissipative systems that are a channel through which to discharge persistent chemical disequilibrium, and our core biochemistry as being able to do so at a remarkably low level of organization, see abiogenesis as a kind of breakdown into the preferred state of a planet out of equilibrium and under chemical stress. The idea is that even though the breakdown is stochastic, it is still the preferred state you are pushing the system into by putting a stress on it. See Dr. Eric Smith for a discussion of the idea from one direction (there is a lot of diversity in ideas on this front):
https://www.youtube.com/watch?v=0cwvj0XBKlE
https://www.youtube.com/watch?v=7DfzoBvnM2g
EDITED: swapped in the right videos; the earlier link was wrong but still relevant.
No, it's the opposite: if (as they argue) we don't expect many nearby aliens, then it's irrelevant whether or not we would be able to see them.
The perils of posting quickly in the middle of rapid apartment hunting (for a new postdoc position at a university with a bunch of yeast cell biologists AND astrobiologists! YES!).
I was referring to slide 27, with the various probability distribution graphs conditioned on various observations. The 'no colonization' conditional graphs all leave the left, low-number tail intact while chopping off the probability bulge to the right of 'one in our galaxy' in various different ways. But this is only valid under the assumption that exponential colonization, or galactic-scale visibility to a few decades of rather poor observations against the screaming, burning backdrop of the astrophysical universe, is POSSIBLE. (Allow me to preemptively counter the 'but only one has to be able to' argument: this is an event that would be extraordinarily correlated across everybody.) There are vast numbers of possibilities for the fate of intelligent systems, other than rapid extinction or consuming the universe, that so many people leave insufficiently explored.
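A crude sketch of what that kind of conditioning does mechanically, reusing the toy nine-factor prior from the summary above as a stand-in for the slide-27 distributions (the hard cut-off at one civilisation per galaxy is a made-up simplification, not the likelihood model in the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stars_in_galaxy = 1e11

# Stand-in prior over N = civilisations per galaxy, using the toy nine-factor product.
p_per_star = rng.uniform(0.0, 0.2, size=(1_000_000, 9)).prod(axis=1)
N = p_per_star * n_stars_in_galaxy

# Crude 'no visible colonization' update: simply reject samples with N above 1,
# i.e. chop off the bulge to the right while leaving the low-number tail intact.
N_conditioned = N[N <= 1.0]

print("prior median:            ", np.median(N))              # ~8.7
print("conditioned median:      ", np.median(N_conditioned))  # pushed into the low-number tail
print("fraction of samples kept:", N_conditioned.size / N.size)
```

Whether that chop is justified at all is exactly what the visibility assumption above decides.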
Without these conditional probability bounds, the given probability distribution is distinctly uninformative. It basically says 'with the distribution of probabilities that can be extracted from the literature on the subject, no intelligent systems in the visible universe is as likely as thousands to a billion in our galaxy' (that little bump on the right side of the distribution is pretty intense). I also happen to think that the given abiogenesis probability distributions are far too wide on the low side, that we have not at all excluded the possibility of multiple completely independent biospheres in our own solar system, and that complex life has some possibility of being limited more by geological/orbital/energetic issues than by evolution, which introduces an interesting bimodality to that probability distribution; but that's just me (and the people whose work I follow).