I don’t see why it’s likely one of the numbers has to be big.
This is a statement about my priors on the number of filters and the size of a filter, and I’m not sure I can briefly communicate why I hold that prior. Maybe it’s a statement about conceptual clumpiness.
If you have a long ridge to climb in a limited time and most people fail to do it, it’s not very likely that there is one specific part of it which is very hard; unless you have actual data showing that most people fail at the same place, it’s more likely that there are lots of moderately difficult parts and few people succeed in all of them in time.
To me, your claim is a statement that the number of planets at each step follows a fairly smooth exponential, and a specific hard part means you would have a smooth exponential before a huge decrease, then another smooth exponential. But we don’t know what the distribution of life on planets looks like, so we can’t settle that argument.
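As a toy illustration of the two shapes being contrasted (all per-step pass rates below are made up), here is a quick sketch comparing ten moderately hard steps against nine easyish steps plus one dominant one, both tuned to the same overall pass rate:

```python
import numpy as np

# Two hypothetical filters with the same overall pass rate (~1e-10) but
# different shapes; the per-step pass rates are purely illustrative.
steps = 10
uniform = np.full(steps, 0.1)              # ten moderately hard steps
one_hard = np.full(steps, 0.1 ** (1 / 9))  # nine easyish steps...
one_hard[5] = 1e-9                         # ...and one very hard one

for name, rates in [("many moderate steps", uniform), ("one dominant step", one_hard)]:
    survivors = np.cumprod(rates)          # fraction of planets left after each step
    print(name, np.round(np.log10(survivors), 1))
```

The first prints a straight line in log space, the second the smooth/cliff/smooth pattern; but since we only observe the survivors and not the whole curve, this doesn’t settle anything by itself.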
Similarly, we know about the planning fallacy because we make many plans and complete many projects; if only one project had ever been completed, we probably could not tell in retrospect which parts were easy and which were hard, because that one success must have gotten lucky even on the “hard” components. Hanson wrote a paper on this in 1996 that doesn’t appear to be on his website anymore, but it’s a straightforward integration: give each step an exponential distribution over time to completion, with ‘hardness’ determining the rate parameter, and condition on early success.
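The conditioning effect is easy to reproduce with a quick Monte Carlo (a sketch of the idea, not Hanson’s actual derivation): draw exponential step times with very different rates, keep only the runs that finish before a deadline, and look at how long each step took among the survivors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three steps with very different 'hardness' (mean completion times 1, 10, 1000)
# and a deadline short enough that almost every attempt fails.
means = np.array([1.0, 10.0, 1000.0])
deadline = 5.0
n = 2_000_000

times = rng.exponential(means, size=(n, len(means)))
ok = times.sum(axis=1) <= deadline          # condition on finishing in time

print("success rate:", ok.mean())
print("mean step times given success:", times[ok].mean(axis=0).round(2))
```

Among the rare successful runs the ‘hard’ steps no longer take dramatically longer than the easy one, which is why a single observed success tells you very little about where the hardness was.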
I would instead look at the various steps in the filter, and generalize the parameters of those steps, which then generate universes with various levels of noise / age at first space-colonizing civilization. If you have fat-tailed priors on those parameters, I think you’ll get that it’s more likely for there to be one dominant factor in the filter. Maybe I should make the effort to formalize that argument.
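A minimal version of that formalization (with an arbitrary number of steps and an arbitrary fat-tailed prior) might look like this: draw each step’s difficulty, in orders of magnitude of improbability, from a wide lognormal, and check how often the single hardest step accounts for most of the total filter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Difficulty of each filter step, in orders of magnitude of improbability,
# drawn from a fat-tailed prior; 10 steps and lognormal(1, 2) are arbitrary.
n_universes, n_steps = 100_000, 10
log_difficulty = rng.lognormal(mean=1.0, sigma=2.0, size=(n_universes, n_steps))

share_of_hardest = log_difficulty.max(axis=1) / log_difficulty.sum(axis=1)
print("median share of the single hardest step:",
      round(float(np.median(share_of_hardest)), 2))
print("fraction of universes where one step is >50% of the filter:",
      round(float((share_of_hardest > 0.5).mean()), 2))
```

With a wide prior (sigma around 2) one step usually dominates; shrink sigma and it usually doesn’t, so the conclusion really does hinge on the tails.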
This Hanson paper?
Yep; for some reason the links I found all point at a .ps file that no longer exists.
Another way of thinking about the filter/steps is as a continuous developmental trajectory. We have a reasonably good idea of one sample trajectory, the history of our solar system, and we want to determine whether this particular civilization-bearing subspace we are in is more like the main sequence or more like a tightrope.
If the development stages have lots of conjunctive/multiplicative dependencies (for example: early life requires a terrestrial planet in the habitable zone with the right settings for various parameters), then a lognormal distribution might be a good fit. This seems reasonable, and the lognormal is of course extremely heavy-tailed.
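That intuition is easy to check numerically: multiply many independent positive factors standing in for conjunctive requirements (the particular uniform range and factor count below are arbitrary), and the log of the product is approximately normal, so the product itself is approximately lognormal, with a mean far above its median.

```python
import numpy as np

rng = np.random.default_rng(2)

# Product of many independent positive factors, each a stand-in for one
# conjunctive requirement; uniform(0.1, 1.0) and 30 factors are arbitrary.
n_samples, n_factors = 100_000, 30
factors = rng.uniform(0.1, 1.0, size=(n_samples, n_factors))
product = factors.prod(axis=1)

logs = np.log(product)
skew = float(((logs - logs.mean()) ** 3).mean() / logs.std() ** 3)
print("skewness of log-product (near 0 if lognormal):", round(skew, 3))
print("mean/median ratio of the product (>>1 means a heavy right tail):",
      round(float(product.mean() / np.median(product)), 1))
```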
On the other hand, one problem with this is that seeing a single example trajectory doesn’t give one much evidence about any disjunctive/additive components in the distribution. These would be independent alternate developmental pathways which could bypass the specific developmental chokepoints we see in our one example history.