Thanks, that helps! Here’s where I’m at now:
The “chance of dying” argument goes:
There’s a reference class (people in my age bracket in my country)
Everyone in the reference class is doing a thing (living normal human lives)
Something happens with blah% probability to everyone in the reference class (dying this year)
Since I am a member of the reference class too, I should consider that maybe that thing will happen to me with blah% probability as well (my risk of death this year is blah%).
OK, now aliens.
There’s a reference class (alien civilizations from the past, present, and future)
Everyone in the reference class is doing a thing (doing analyses of how their civilization compares with the distribution of all alien civilizations from the past, present, and future)
Something happens with 99.9% probability to everyone in the reference class (the alien civilization doing this analysis finds that they are somewhere in the middle 99.9% of the distribution, in terms of civilization origination date).
Since I am a member of the reference class too, I should consider that maybe that thing will happen to me with 99.9% probability too.
…So if a purported distribution of alien civilizations has me in the first 0.05%, I should treat it like a hypothesis that made a 99.9% confident prediction and the prediction was wrong, and I should reduce my credence on that hypothesis accordingly.
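(If it helps, here’s the shape of the update I have in mind as a toy Python sketch with made-up numbers; I’m not claiming these are the right likelihoods, just illustrating the structure.)

```python
# Toy Bayes update: treat "where my civilization falls in the purported
# distribution H" as if it were a draw from that distribution.
# All numbers are made up, just to show the structure of the update.
prior_H = 0.5                 # prior credence in the purported distribution H
likelihood_given_H = 0.0005   # under H, only 0.05% of observers are this early
likelihood_given_alt = 0.5    # under some alternative that expects early observers (made up)

posterior_H = (likelihood_given_H * prior_H) / (
    likelihood_given_H * prior_H + likelihood_given_alt * (1 - prior_H)
)
print(posterior_H)  # ~0.001: credence in H drops by roughly a factor of 1000
```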
Is that right?
If so, I find the first argument convincing but not the second one. They don’t seem sufficiently parallel to me.
For example, in the first argument, I want to make the reference class as much like me as possible, up to the limits of the bias-variance tradeoff. But in the second case, if I update the reference class to “alien civilizations from the first 14 billion years of the universe”, then it destroys the argument: we’re not early anymore. Or is that cheating? An issue here is that the first argument is a prediction and the second is a postdiction, and in a postdiction it seems like cheating to condition on the thing you’re trying to postdict. But that disanalogy is fixable: I’ll turn the first argument into a postdiction by saying that I’m trying to estimate “my chances of having died in the past year”—i.e., I know in hindsight that I didn’t die, but was it because I got lucky, or was that to be expected?
So then I’m thinking: what does it even mean to produce a “correct” probability distribution for a postdiction? Why wouldn’t I update on all the information I have (including the fact that I didn’t die) and say “well gee I guess my chance of dying last year was 0%!”? Well, in this chance-of-dying case, I have a great way to ground out what I’m trying to do here: I can talk about “re-running the past year with a new (metaphorical) ‘seed’ on all the quantum random number generators”. Then the butterfly effect would lead to a distribution of different-but-plausible rollouts of the past year, and we can try to reason about what would have happened. It’s still a hard problem, but at least there’s a concrete thing that we’re trying to figure out.
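(Here’s a toy sketch of what I mean by grounding the postdiction in re-rolls; the per-rollout hazard is made up, and the point is just that the target quantity is a well-defined frequency over rollouts.)

```python
import random

# Toy model of "re-running the past year with a new seed": each rollout is a
# fresh draw, and the postdictive probability I'm after is just the fraction
# of rollouts in which I die. The 0.003 hazard is made up; the point is that
# the quantity being estimated is a well-defined frequency over rollouts.
random.seed(0)
rollouts = 1_000_000
deaths = sum(random.random() < 0.003 for _ in range(rollouts))
print(deaths / rollouts)  # ≈ 0.003
```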
But there seems to be no analogous way in the alien case to ground out what a “correct” postdictive probability distribution is, because like I said above (with the Pixar movie comment), there was no actual random mechanism involved in the fact that I am a human rather than a member of a future alien civilization that will evolve a trillion years from now in a different galaxy. (We can talk about re-rolling the history of earth, but it seems to me that this kind of analysis would look like normal common-sense reasoning about hard steps in evolution etc., and not weird retrocausal inferences like “if earth-originating life is likely to prevent later civilizations from arising, then that provides such-and-such evidence about whether and when intelligent life would arise in a re-roll of earth’s history”.)
I can also point out various seemingly-arbitrary choices involved in the alien-related reference class, like whether we should put more weight on the civilizations that have higher population, or higher-population-of-people-doing-anthropic-probability-analyses, and whether non-early aliens are less likely to be thinking about how early they are (as opposed to some other interesting aspect of their situation … do we need a Bonferroni correction to account for all the possible ways that an intelligent civilization could appear atypical??), etc. Also, I feel like there’s a kind of reductio here: this argument is awfully similar in structure to the doomsday argument, so I find it hard to imagine how one could accept the grabby aliens argument but reject the doomsday argument, and yet the two seem kinda contradictory to me. It makes me skeptical of this whole enterprise. :-/ But again I’ll emphasize that I’m just thinking this through out loud and I’m open to being convinced. :-)
For the purposes of this discussion it’s probably easier to get rid of the “alien” bit and just talk about humans. For instance, consider this thought experiment (discussed here by Bostrom):
A firm plan was formed to rear humans in two batches: the first batch to be of three humans of one sex, the second of five thousand of the other sex. The plan called for rearing the first batch in one century. Many centuries later, the five thousand humans of the other sex would be reared. Imagine that you learn you’re one of the humans in question. You don’t know which centuries the plan specified, but you are aware of being female. You very reasonably conclude that the large batch was to be female, almost certainly. If adopted by every human in the experiment, the policy of betting that the large batch was of the same sex as oneself would yield only three failures and five thousand successes. . . . [Y]ou mustn’t say: ‘My genes are female, so I have to observe myself to be female, no matter whether the female batch was to be small or large. Hence I can have no special reason for believing it was to be large.’ (Ibid. pp. 222–3)
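(For concreteness, here’s one way to spell out the arithmetic behind “almost certainly”, assuming a 50/50 prior on which batch was to be female and treating yourself as a random one of the 5,003 people:)

```python
# The batch example, worked out (assuming a 50/50 prior on which batch was to
# be female, and treating "which of the 5,003 people am I?" as uniform):
prior_large_female = 0.5
p_female_if_large_female = 5000 / 5003   # you're female in 5000 of the 5003 cases
p_female_if_large_male = 3 / 5003        # only the 3-person batch is female

posterior_large_female = (p_female_if_large_female * prior_large_female) / (
    p_female_if_large_female * prior_large_female
    + p_female_if_large_male * (1 - prior_large_female)
)
print(posterior_large_female)  # ≈ 0.9994, i.e. "almost certainly" the large batch was female
```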
I’m curious about whether you agree or disagree with the reasoning here (and more generally with the rest of Bostrom’s reasoning in the chapter I linked).
To respond to your points more specifically: I don’t think your attempted analogy is correct; here’s the replacement I’d use:
Consider the group of all possible alien civilisations (respectively: all living humans).
Everyone in the reference class is either in a grabby universe or not (respectively: is either going to die this year or not).
Those who are in grabby universes are more likely to be early (respectively: those who will survive this year are more likely to be young).
When you observe how early you are, you should think you’re more likely to be in a grabby universe (respectively: when you observe how young you are, you should update that you’re more likely to survive).
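(To put toy numbers on that last step: with made-up likelihoods, observing that you’re early produces an ordinary Bayesian shift toward the grabby hypothesis, and the survival case has exactly the same structure.)

```python
# Toy version of the analogy (all numbers made up): observing that you're
# early updates you toward being in a grabby universe, just as observing that
# you're young updates you toward surviving the year.
prior_grabby = 0.5
p_early_if_grabby = 0.10        # early observers are relatively common in grabby universes
p_early_if_not_grabby = 0.01    # and rare otherwise

posterior_grabby = (p_early_if_grabby * prior_grabby) / (
    p_early_if_grabby * prior_grabby
    + p_early_if_not_grabby * (1 - prior_grabby)
)
print(posterior_grabby)  # ≈ 0.91: a 10:1 likelihood ratio in favour of grabby
```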
Thanks!!
First and foremost, I haven’t thought about it very much :)
I admit, the Bostrom book arguments you cite do seem intuitively compelling. Also, if Bostrom and lots of other reasonable people think SSA is sound, I guess I’m somewhat reluctant to disagree.
(However, I thought the “grabby aliens” argument was NOT based on SSA, and in fact is counter to SSA, because they’re not weighing the alien civilizations by their total populations?)
On the other hand, I find the following argument equally compelling:
Alice walks up to me and says, “Y’know, I was just reading, it turns out that local officials in China have weird incentives related to population reporting, and they’ve been cooking the books for years. It turns out that the real population of China is more like 1.9B than 1.4B!” I would have various good reasons to believe Alice here, and various other good reasons to disbelieve Alice. But “The fact that I am not Chinese” does not seem like a valid reason to disbelieve Alice!!
Maybe here’s a compromise position: Strong evidence is common. I am in possession of probably millions of bits of information pertaining to x-risks and the future of humanity, and then the Doomsday Argument provides, like, 10 additional bits of information beyond that. It’s not that the argument is wrong, it’s just that it’s an infinitesimally weak piece of evidence compared to everything else. And ditto with the grabby aliens argument versus “everything humanity knows pertaining to astrobiology”. Maybe these anthropic-argument thought experiments are getting a lot of mileage out of the fact that there’s no other information whatsoever to go on, and so we need to cling for dear life to any thread of evidence we can find, and maybe that’s just not the usual situation for thinking about things, given that we do in fact know the laws of physics and so on. (I don’t know if that argument holds up to scrutiny, it’s just something that occurred to me just now.) :-)
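(The bookkeeping I have in mind is just bits = log2(likelihood ratio), so “about 10 bits” is shorthand for an update of roughly a thousand to one:)

```python
import math

# A likelihood ratio L is worth log2(L) bits; a doomsday-style update of about
# 1000:1 is therefore roughly 10 bits, tiny next to everything else we know.
likelihood_ratio = 1000   # made-up stand-in for the doomsday argument's strength
print(math.log2(likelihood_ratio))  # ≈ 9.97 bits
```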
Maybe here’s a compromise position: Strong evidence is common. I am in possession of probably millions of bits of information pertaining to x-risks and the future of humanity, and then the Doomsday Argument provides, like, 10 additional bits of information beyond that. It’s not that the argument is wrong, it’s just that it’s an infinitesimally weak piece of evidence compared to everything else.
Thanks for making this point and connecting it to that post. I’ve been thinking that something like this might be the right way to think about a lot of this anthropics stuff — yes, we should use anthropic reasoning to inform our priors, but also we shouldn’t be afraid to update on all the detailed data we do have. (And some examples of anthropics-informed reasoning seem not to do enough of that updating.)
FWIW this has also been my suspicion for a while.
On the other hand, I find the following argument equally compelling
The argument you discuss is an example of very weak anthropic evidence, so I don’t think it’s a good intuition pump about the validity of anthropic reasoning in general. In general, anthropic evidence can be quite strong—the presumptuous philosopher thought experiment, for instance, argues for an update of a trillion to one.
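(For scale, here’s the presumptuous philosopher update under SIA as a sketch, assuming a 50/50 prior and that T2 posits a trillion times as many observers as T1:)

```python
# Presumptuous philosopher under SIA (sketch): with a 50/50 prior and T2
# positing a trillion times as many observers as T1, SIA weights each theory
# by its observer count, giving roughly trillion-to-one odds for T2.
prior_T1, prior_T2 = 0.5, 0.5
observers_T1, observers_T2 = 1, 10**12   # relative observer counts (assumed)

posterior_T2 = (prior_T2 * observers_T2) / (
    prior_T1 * observers_T1 + prior_T2 * observers_T2
)
print(posterior_T2)  # ≈ 0.999999999999
```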
However, I thought the “grabby aliens” argument was NOT based on SSA, and in fact is counter to SSA, because they’re not weighing the alien civilizations by their total populations?
I think there’s a terminological confusion here. People sometimes talk about SSA vs SIA, but in Bostrom’s terminology the two options for anthropic reasoning are SSA + SIA, or SSA + not-SIA. So in Bostrom’s terminology, every time you’re doing anthropic reasoning, you’re accepting SSA; and the main reason I linked his chapter was just to provide intuitions about why anthropic reasoning is valuable, not as an argument against SIA. (In fact, the example I quoted above has the same outcome regardless of whether you accept or reject SIA, because the population size is fixed.)
I don’t know whether Hanson is using SIA or not; the previous person who’s done similar work tried both possibilities. But either would be fine, because anthropic reasoning has basically been solved by UDT, in a way which dissolves the question of whether or not to accept SIA—as explained by Stuart Armstrong here.