My view on the “Fermi paradox” is not that there is a single filter cutting ~10 orders of magnitude (i.e., from the ~10 billion planets in our galaxy that could harbor life down to just one), but rather a combination of many small filters, each taking its own cut.
I don’t think that the Great Filter implies only one filter, but I think that if you’re multiplying several numbers together and they come out to at least 10^10, it’s likely that at least one of the numbers is big. (And if one of the numbers is big, that makes it less necessary for the other numbers to be big.)
Put another way, it seems more likely to me that there is one component filter of size 10^6 than two component filters each of size 10^3, both of which seem much more likely than that there are 20 component filters of size 2.
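For concreteness, the three splits being compared all multiply out to roughly the same total, about a factor of a million, so the question is purely how a fixed total gets partitioned:

```python
# Sanity check: the three scenarios above are (almost) the same total filter,
# just split into different numbers of component filters.
print(10 ** 6)            # one component filter of size 10^6
print(10 ** 3 * 10 ** 3)  # two component filters of size 10^3 each
print(2 ** 20)            # twenty component filters of size 2 (= 1048576, ~10^6)
```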
I don’t see why it’s likely one of the numbers has to be big. There really are lots of complicated steps to cross on the way from inert matter to space-faring civilizations; it’s very easy to point to a dozen such steps that could fail in various ways or just take too long, and there are many disasters that could wipe everything out along the way.
If you have a long ridge to climb in a limited time and most people fail to do it, it’s not very likely that there is one very specific part of it which is very hard; unless you have actual data showing that most people fail at the same place, it’s more likely that there are lots of moderately difficult parts and few people succeed at all of them in time.
Or if you have a complicated project that takes 4x longer than expected to finish, it’s much less likely that there was a single big difficulty you didn’t foresee than that there were many small-to-moderate unforeseen difficulties stacking on top of each other. The planning fallacy isn’t usually due to black swans, but to smaller factors accumulating. It’s the same here.
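As a purely illustrative calculation (the dozen steps and the ~15% pass rate are made-up numbers, not estimates of anything), moderate filters compound to ten orders of magnitude quite easily:

```python
# Hypothetical example: twelve independent steps, each of which only ~15% of
# candidates get through. No single step is a "great" filter, yet together they
# cut roughly ten orders of magnitude.
pass_rates = [0.15] * 12

survival = 1.0
for p in pass_rates:
    survival *= p

print(f"combined survival probability: {survival:.1e}")  # about 1.3e-10
```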
I don’t see why it’s likely one of the numbers has to be big.
This is a statement about my priors on the number of filters and the size of a filter, and I’m not sure I can briefly communicate why I hold that prior. Maybe it’s a statement about conceptual clumpiness.
If you have a long ridge to climb in a limited time and most people fail to do it, it’s not very likely that there is one very specific part of it which is very hard; unless you have actual data showing that most people fail at the same place, it’s more likely that there are lots of moderately difficult parts and few people succeed at all of them in time.
To me, your claim amounts to saying that the number of planets surviving each step falls off as a fairly smooth exponential, whereas a specific hard part would mean a smooth exponential, then a huge drop, then another smooth exponential. But we don’t know what the distribution of life across planets looks like, so we can’t settle that argument.
Similarly, we know about the planning fallacy because we make many plans and complete many projects; if there were only one project that had ever been completed, we probably could not tell in retrospect which parts were easy and which were hard, because we must have gotten lucky even on the “hard” components. Hanson wrote a paper on this in 1996 that doesn’t appear to be on his website anymore, but it’s a straightforward integration given exponential distributions over time to completion, with ‘hardness’ determining the rate parameter, and conditioning on early success.
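Here is a minimal Monte Carlo sketch of that result as I understand it (the three steps, their hardness levels, and the deadline are invented for the demo, not taken from the paper): condition on a sequence of exponentially distributed hard steps all finishing inside a fixed window, and the time spent in each step comes out roughly the same, no matter how hard the step actually was.

```python
import numpy as np

rng = np.random.default_rng(0)

window = 1.0                              # deadline for the whole sequence
mean_times = np.array([3.0, 10.0, 30.0])  # intrinsic expected durations, all >> window

# Draw many candidate histories and keep the rare ones that finish every step in time.
n = 2_000_000
durations = rng.exponential(mean_times, size=(n, len(mean_times)))
ok = durations.sum(axis=1) < window

print(f"fraction of histories that succeed early: {ok.mean():.1e}")
for mean_t, cond_mean in zip(mean_times, durations[ok].mean(axis=0)):
    print(f"step with expected duration {mean_t:5.1f} -> conditional mean {cond_mean:.2f}")

# Despite the 10x differences in intrinsic hardness, the conditional means all land
# near window / (number of steps + 1), so a single successful history can't tell
# you which steps were the hard ones.
```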
I would instead look at the various steps in the filter and generalize the parameters of those steps, which then generate universes with varying levels of noise and varying ages at the first space-colonizing civilization. If you have fat-tailed priors on those parameters, I think you’ll get that it’s more likely for there to be one dominant factor in the filter. Maybe I should make the effort to formalize that argument.
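Here is a rough sketch of how that formalization might start; the dozen steps and the particular priors (a Pareto and a half-normal on how many orders of magnitude each step cuts) are my own arbitrary choices for the demo. Draw per-step filter strengths from a prior, keep only the universes whose combined filter is around ten orders of magnitude, and ask how much of the total the single largest step accounts for:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_worlds = 12, 500_000

def median_largest_share(cuts, lo=9.0, hi=11.0):
    """Keep only universes whose combined filter is ~10 orders of magnitude, then
    report the median share of the total cut contributed by the single largest step."""
    totals = cuts.sum(axis=1)
    kept = cuts[(totals > lo) & (totals < hi)]
    return np.median(kept.max(axis=1) / kept.sum(axis=1)), len(kept)

# Two priors on how many orders of magnitude each step cuts.
fat = 0.2 * rng.pareto(a=1.5, size=(n_worlds, n_steps))        # fat-tailed
thin = np.abs(rng.normal(0.0, 0.6, size=(n_worlds, n_steps)))  # thin-tailed

for name, cuts in [("fat-tailed (Pareto)", fat), ("thin-tailed (half-normal)", thin)]:
    share, kept = median_largest_share(cuts)
    print(f"{name:25s}  worlds kept: {kept:6d}  median largest-step share: {share:.2f}")

# The largest step's share comes out far bigger under the fat-tailed prior, which
# is the sense in which conditioning on a ~10^10 total filter favors one dominant
# factor (given fat-tailed priors on the individual steps).
```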
I would instead look at the various steps in the filter and generalize the parameters of those steps, which then generate universes with varying levels of noise and varying ages at the first space-colonizing civilization. If you have fat-tailed priors on those parameters, I think you’ll get that it’s more likely for there to be one dominant factor in the filter. Maybe I should make the effort to formalize that argument.
Another way of thinking about the filter/steps is as a continuous developmental trajectory. We have a reasonably good idea of one sample trajectory, the history of our solar system, and we want to determine whether this particular civilization-bearing subspace we are in is more like the main sequence or more like a tightrope.
If the development stages have lots of conjunctive/multiplicative dependencies (for example: early life requires a terrestrial planet in the habitable zone with the right settings for various parameters), then a lognormal distribution might be a good fit. This seems reasonable, and the lognormal is of course extremely heavy-tailed.
On the other hand, one problem with this is that seeing a single trajectory example doesn’t give one much evidence for any disjunctive/additive components in the distribution. These would be any independent alternate developmental pathways which could bypass the specific developmental chokepoints we see in our single example history.
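As a toy version of the multiplicative picture (the twenty stages and the spread of the per-stage factors are arbitrary choices for the demo): multiplying many independent per-stage factors gives an overall factor whose log is roughly normal, so the result is approximately lognormal, with the mean dragged far above the median by a few extreme draws.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stages, n_worlds = 20, 100_000

# Each stage multiplies a world's overall "suitability" by an independent factor
# between 0.1 and 1 (uniform in log space).
log10_factors = rng.uniform(-1.0, 0.0, size=(n_worlds, n_stages))
overall = 10.0 ** log10_factors.sum(axis=1)

print(f"median overall factor:  {np.median(overall):.1e}")
print(f"mean overall factor:    {overall.mean():.1e}")  # mean >> median
lo, hi = np.percentile(overall, [5, 95])
print(f"5th to 95th percentile: {lo:.1e} to {hi:.1e}")  # spans ~4 orders of magnitude

# The sum of twenty independent logs is close to normal (central limit theorem),
# so the product is close to lognormal, and a few atypically permissive worlds
# dominate the mean.
```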
This Hanson paper?
Yep; for some reason the links I found all point at a .ps file that no longer exists.