Strongly disagree about the “great filter” point.
Any sane prior on how many alien civilizations we should have expected to see is structured (or at least, largely structured) more or less like the Drake equation: a series of terms, each with more or less prior uncertainty around it, that multiply together to give the outcome. Furthermore, that point is to some degree fractal: the terms themselves can often and substantially (though not always and completely) be understood as products of sub-terms.
By the Central Limit Theorem, as the number of such terms and sub-terms increases, the log of their product approaches a normal distribution, so the prior itself approaches a log-normal distribution. That means that if you take the inverse (proportional to the amount of work we'd expect to have to do to find the first extraterrestrial civilization), the mean is much higher than the median, dominated by a long upper tail. That point applies not just to the prior, but to the posterior after conditioning on evidence. (In fact, as we come to have less uncertainty about the basic structure of the Drake-type equation, that is, which terms it comprises, even though we may still have substantial uncertainty about the values of those terms, the argument that the posterior must be approximately log-normal only grows stronger than it was for the prior.)
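For intuition, here's a minimal simulation sketch; the term ranges below are invented purely for illustration, not estimates of the real Drake terms. It just shows that multiplying several independently uncertain factors gives a roughly log-normal outcome, so the inverse has a mean far above its median.

```python
# Sketch: product of several uncertain factors is roughly log-normal,
# so the inverse ("work to find the first civilization") has mean >> median.
# All ranges below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Each factor's log10 is drawn uniformly over a few orders of magnitude.
log10_ranges = [(-1, 2),   # e.g. a rate-like term
                (-2, 0),   # a fraction-like term
                (-3, 0),
                (-4, 0),
                (-4, 0),
                (-3, 2)]

log10_product = sum(rng.uniform(lo, hi, n) for lo, hi in log10_ranges)
expected_civs = 10.0 ** log10_product        # roughly log-normal by the CLT
work_to_find_one = 1.0 / expected_civs       # the inverse is also log-normal

print(f"median work: {np.median(work_to_find_one):.3g}")
print(f"mean work:   {np.mean(work_to_find_one):.3g}")  # dominated by the upper tail
```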
In this situation, given the substantial initial uncertainty about the values of the terms associated with steps that have already happened, the evidence we can draw from the Great Silence about any steps in the future is very, very weak.
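To make that concrete, here's a toy update; the priors, the star count, and the Poisson-style silence likelihood are all assumptions of mine for illustration, not a model I'm defending. The point it illustrates: when the already-completed ("early") terms span many orders of magnitude, conditioning on the silence barely moves the posterior on the future ("late") term.

```python
# Toy Bayesian update: observe "silence" (zero visible civilizations) and
# see how much the posterior on the late-step term shifts. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
n_stars = 1e11  # hypothetical number of candidate star systems

# Wide prior on the per-star probability of reaching our stage (early steps),
# narrower prior on the probability of then becoming visible (late steps).
log10_p_early = rng.uniform(-30, 0, n)
log10_p_late = rng.uniform(-6, 0, n)

expected_visible = n_stars * 10.0 ** (log10_p_early + log10_p_late)
likelihood_silence = np.exp(-expected_visible)   # Poisson chance of seeing zero

weights = likelihood_silence / likelihood_silence.sum()
print(f"prior mean of log10(p_late):     {log10_p_late.mean():.2f}")
print(f"posterior mean of log10(p_late): {np.sum(weights * log10_p_late):.2f}")  # barely moves
```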
As a statistics PhD with professional experience in Bayesian inference, my confidence in the above is pretty high. That is, I would be willing to bet on this at basically any odds, as long as the potential payoff was high enough to compensate me for the time it would take to do due diligence on the bet (that is, make sure I wasn't going to get "cider in my ear", as Sky Masterson says). That's not to say that I'd bet strongly against any future "Great Filter"; I'd just bet strongly against the idea that a sufficiently well-informed observer would conclude, post hoc, that the bullet point above about the "great filter" was at all well-justified by the evidence implicitly cited.
True, the typical argument that the Great Silence implies a late filter is weak, because an early filter is not all that implausible a priori.
However, the OP (Katja Grace) specifically mentioned “anthropic reasoning”.
As she previously pointed out, an early filter makes our present existence much less probable than a late filter does. So, given our current experience, we should weight the probability of a late filter much higher than we would on the prior alone, without anthropic considerations.
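For anyone who wants the update spelled out numerically, here's a minimal sketch; the prior and the two likelihoods are placeholders I made up, not estimates.

```python
# Minimal numerical sketch of the anthropic update described above.
# All probabilities are invented purely for illustration.
prior_early, prior_late = 0.5, 0.5   # prior odds on early vs late filter
p_us_given_early = 1e-4              # chance of observers like us existing if the filter is early
p_us_given_late = 1e-1               # chance of observers like us existing if the filter is late

# Condition on the fact that we exist.
joint_early = prior_early * p_us_given_early
joint_late = prior_late * p_us_given_late
post_late = joint_late / (joint_early + joint_late)
print(f"P(late filter | we exist) = {post_late:.4f}")   # ~0.999: strongly favors a late filter
```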
Thanks for pointing that out. My arguments above do not apply.
I’m still skeptical. I buy anthropic reasoning as valid in cases where we share an observation across subjects and time (eg, “we live on a planet orbiting a G2V-type star”, “we inhabit a universe that appears to run on quantum mechanics”), but not in cases where each observation is unique (eg, “it’s the year 2021, and there have been about 107,123,456,789 (plus or minus a lot) people like me ever”). I am far less confident of this than I stated for the arguments above, but I’m still reasonably confident, and my expertise does still apply (I’ve thought about it more than just what you see here).
This could mean you would also have to reject thirding in the famous Sleeping Beauty problem, which contradicts a straightforward frequentist interpretation of the setup: if the SB experiment were repeated many times, one third of the awakenings would be Monday-Heads awakenings, so if SB guessed "the coin came up heads" at each awakening, she would be right with frequentist probability 1/3.
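Here's a quick simulation of that frequentist reading, assuming the standard protocol (Heads: one Monday awakening; Tails: Monday and Tuesday awakenings):

```python
# Frequency check for Sleeping Beauty: what fraction of awakenings occur
# in experiments where the coin came up heads?
import random

random.seed(0)
awakenings = 0
heads_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    days = ["Mon"] if heads else ["Mon", "Tue"]
    for _ in days:
        awakenings += 1
        if heads:
            heads_awakenings += 1

print(f"fraction of awakenings with Heads: {heads_awakenings / awakenings:.3f}")  # ~1/3
```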
Of course there are possible responses to this. My point is just that rejecting Katja's doomsday argument by rejecting SIA-style anthropic reasoning may come with implausible consequences in other areas.