I’ve been meaning to post about the Doomsday Argument for a while. I have a strong sense that it’s wrong, but I’ve had a hell of a time trying to put my finger on how it fails. The best that I can come up with is as follows: Aumann’s agreement theorem says that two rational agents with common priors cannot agree to disagree. In particular, two rational agents presented with the same evidence should update their probability distributions in the same direction. Suppose I learn that I am the 50th human, and I am led to conclude that it is far more likely that only 1000 humans will ever live than that 100,000 will. But suppose I go tell Bob that I’m the 50th human; it would be senseless for him to come to the same conclusion that I have. Formally, it looks something like this:
P(1000 humans | I am human #50) > P(1000 humans)
but
P(1000 humans | Skatche is human #50) = P(1000 humans)
where the right-hand sides are the prior probabilities. The same information has been conveyed in each case, yet very different conclusions have been reached. Since this cannot be, I conclude that the Doomsday Argument is mistaken. This could perhaps be adapted as an argument against anthropic reasoning more generally.
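To make the update explicit, here is a minimal sketch of the Bayesian arithmetic the Doomsday Argument relies on, under the Self-Sampling Assumption that I am equally likely to be any of the humans who will ever live. The two candidate totals and the 50/50 prior between them are illustrative assumptions of mine, not anything the argument itself fixes.

```python
# Minimal sketch of the Doomsday-style update. The hypotheses and the
# 50/50 prior between them are illustrative assumptions.

def doomsday_posterior(rank, priors):
    """Posterior over total-population hypotheses given my birth rank,
    assuming I am equally likely to be any of the N humans who ever live."""
    # Likelihood of having this birth rank if N humans ever live: 1/N (0 if rank > N).
    unnorm = {n: (prior / n if rank <= n else 0.0) for n, prior in priors.items()}
    total = sum(unnorm.values())
    return {n: p / total for n, p in unnorm.items()}

post = doomsday_posterior(rank=50, priors={1000: 0.5, 100000: 0.5})
print(post)  # {1000: ~0.990, 100000: ~0.010}: rank 50 strongly favours the smaller total
```

The likelihood ratio is 100:1 in favour of the 1000-human world, and that is exactly the shift that strikes me as illegitimate when Bob performs it on my behalf.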
“it would be senseless for him to come to the same conclusion that I have.”
Why do you say that?
Suppose you have an urn containing consecutively numbered balls. But you don’t know how many. Draw one ball from the urn and update your probabilities regarding the number of balls. Draw a second ball, and update again.
Two friends each draw one ball and then share information. I don’t see why the ball you drew yourself should be privileged.
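A rough sketch of that symmetry, assuming a uniform prior over urn sizes up to some cap (the cap of 10,000 is just mine): pooling the two draws gives the same posterior over the number of balls whichever of us drew which ball.

```python
# Two-draw urn update: the pooled posterior over the urn size is the same
# regardless of who drew which ball. The 10,000-ball cap and the uniform
# prior over sizes are assumptions for illustration.

MAX_N = 10_000
prior = {n: 1.0 / MAX_N for n in range(1, MAX_N + 1)}

def update(posterior, ball):
    """Bayes update on a ball drawn uniformly at random from an urn of n balls."""
    unnorm = {n: (p / n if ball <= n else 0.0) for n, p in posterior.items()}
    total = sum(unnorm.values())
    return {n: p / total for n, p in unnorm.items()}

# Say my ball is #50 and my friend's is #120: order (and ownership) makes no difference.
mine_then_friends = update(update(prior, 50), 120)
friends_then_mine = update(update(prior, 120), 50)
print(max(abs(mine_then_friends[n] - friends_then_mine[n]) for n in prior))  # ~0.0
```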
Two variants of this urn problem that may offer some insight into the Doomsday Argument:

1. The balls are not numbered 1, 2, 3, … Instead they are labeled “1st generation”, “2nd generation”, … After sampling, estimate the label on the last ball.
2. The balls are labeled “1 digit”, “2 digits”, “3 digits”, … Again, after sampling, estimate the label on the last ball.
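On the second variant, here is how I would run the numbers under one reading of the setup (ball i is labeled with the digit count of i, we draw a single ball, and we want the posterior on the label of the highest-numbered ball). The uniform prior over urn sizes up to 10,000 is, again, just an assumption.

```python
# The "digits" variant under one reading: ball i is labelled with the number
# of digits of i; after drawing one ball, estimate the label on the last ball.
# The 10,000-ball cap and the uniform prior over sizes are assumptions.

from collections import defaultdict

MAX_N = 10_000

def digits(i):
    return len(str(i))

def posterior_last_label(observed_label):
    post = defaultdict(float)
    matching = 0  # number of balls in 1..n carrying the observed label
    for n in range(1, MAX_N + 1):
        if digits(n) == observed_label:
            matching += 1
        # P(draw a ball with this label | urn has n balls) * P(n), binned by
        # the label of ball n, i.e. the label on the last ball.
        post[digits(n)] += (matching / n) * (1.0 / MAX_N)
    total = sum(post.values())
    return {label: p / total for label, p in post.items()}

print(posterior_last_label(2))  # most of the mass lands on 3- and 4-digit urns
```

At least with this prior, a coarse label like “2 digits” pulls far less towards small urns than learning an exact ball number would.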
I see what you’re saying, but I’m not sure if the analogy applies, since it depends a great deal on the selection process. When I learn that Julius Caesar lived from 100–44 BCE, or that Stephen Harper lives in the present day, that certainly doesn’t increase my estimated probability of humans dying out within the next hundred years; and if I lack information about humans yet to be born, that’s not surprising in the slightest, whether or not we go extinct soon.
Really, it’s the selection process that’s the issue here; I don’t know how to make sense of the question “Which human should I consider myself most likely to be?” I’ve just never been able to nail down precisely what bothers me about it.