This is kind of irrelevant to normal applications of SIA to estimates of the frequency of civilizations: you’re assuming we know this model with infinite certainty, and restricting maximum populations to ludicrously low levels. But in reality we’ll also have uncertainty about the model, e.g. whether life is unlikely or not, and populations could be immense. If we assign even a little weight to those other models with likely life, then SIA will strongly update us towards them.
The example in this post is similar to saying “assume a fair coin, which comes up Heads for its first trillion flips; what is the probability that the next flip will be Heads?” Yes, given the wacky assumption of infinite certainty in the fair-coin model, the probability for the next flip is 0.5, but in fact one should assign some prior credence to other models, and the trillion-Heads streak should give a strong update towards them.
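As a rough numerical illustration of that last point (a minimal sketch, assuming a hypothetical two-hypothesis prior that mixes the fair coin with an "always-Heads" trick coin; the specific prior and numbers are mine, not from the post):

```python
import math

def posterior_fair(n_heads, prior_trick=1e-12):
    """Posterior probability that the coin is fair after n_heads consecutive
    Heads, under an assumed two-model mixture: a fair coin vs. an
    'always-Heads' trick coin. prior_trick is an illustrative choice."""
    # Work in log space: 0.5 ** n_heads underflows for large n_heads.
    log_fair = math.log(1 - prior_trick) + n_heads * math.log(0.5)
    log_trick = math.log(prior_trick)  # trick coin yields Heads with probability 1
    m = max(log_fair, log_trick)
    return math.exp(log_fair - m) / (math.exp(log_fair - m) + math.exp(log_trick - m))

# Even with a one-in-a-trillion prior on the trick coin, a modest streak of
# Heads already makes the trick coin the overwhelmingly favoured model, and
# the predicted probability of Heads on the next flip climbs towards 1.
for n in (10, 40, 100):
    p_fair = posterior_fair(n)
    p_next_heads = p_fair * 0.5 + (1 - p_fair) * 1.0
    print(f"n={n}: P(fair)={p_fair:.3g}, P(next Heads)={p_next_heads:.3g}")
```

The same structure is behind the point above about civilizations: a small prior weight on models where life is likely gets amplified by the SIA-style update.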
This is kind of irrelevant to normal applications of SIA to estimates of the frequency of civilizations
Agreed. I’m not making much of a point here, just that some models make little distinction between SIA and SSA—this may be relevant, for instance, to the presumptuous philosopher. If presumptuous philosophers are unlikely, then Anthropic Decision Theory may push even selfless philosophers towards SSA.