The doomsday argument is controversial not because its conclusion is bleak but because it has some implications that are hard to explain: the choice of reference class is arbitrary yet affects the conclusion, and the argument seems to grant unreasonable predictive power and to involve backward causation. Anyone trying to understand it eventually has to either reject the argument or find some way to reconcile it with these implications. To me, neither position is unreasonable as long as it is sufficiently argued.
I don’t see a problem with the reference class, because I use the following conjecture: “Each reference class has its own end,” together with the idea of a “natural reference class” (similar to “the same computational process” in TDT):
“I am randomly selected from all who think about the Doomsday argument.” The natural reference class gives the saddest predictions: the number of people who know about the DA has been growing since 1983, and this implies the end soon, maybe within a couple of decades.
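As a rough illustration of why a growing reference class shortens the forecast (the doubling time below is my hypothetical assumption, not a figure from the post): under a Gott-style self-sampling estimate, my rank r is the median of N/2, so the class is expected to reach roughly twice its current size before ending. With a constant influx of new members, doubling takes as long as the time already elapsed; with exponential growth, it takes only one doubling time.

```python
def gott_median_future(elapsed_years, doubling_time=None):
    # Gott-style median: my rank r is uniform in (0, N], so the median
    # total N is 2r -- the class roughly doubles in size before ending.
    if doubling_time is None:
        # Constant rate of new members: doubling the cumulative count
        # takes as long as the time already elapsed.
        return elapsed_years
    # Exponential growth: the cumulative count doubles every
    # `doubling_time` years, regardless of elapsed time.
    return doubling_time

elapsed = 2025 - 1983  # years since the DA was first presented
print(gott_median_future(elapsed))      # no-growth assumption: 42 more years
print(gott_median_future(elapsed, 20))  # hypothetical 20-year doubling: 20 more years
```

The faster DA-awareness grows, the more recent the median DA-aware observer is, which is what pushes the median forecast down toward "a couple of decades."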
The predictive power here is probabilistic and does not differ much from other probabilistic predictions we could make.
Backward causation is the most difficult part, but I cannot currently imagine any practical example of it in our world.
PS: I think it is clear what I mean by “Each reference class has its own end,” but some examples may be useful. For example, my rank is about 1,000 among all who know about the DA, but about 90 billion among all humans. In the first case, the DA claims there will be around 1,000 more people who know about the DA; in the second, around 90 billion more humans. These claims do not contradict each other, as they are probabilistic estimates with very wide margins. Both predictions mean extinction within the next decades or centuries. That is, changing the reference class does not change the DA’s final conclusion that extinction is near.
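The arithmetic behind these two estimates can be sketched with the standard Gott-style formula (the ranks are the illustrative figures above; the 95% bound follows from my assumption of the usual vague prior over rank):

```python
def median_future_members(rank):
    # Gott-style self-sampling: my rank r is uniformly distributed
    # in (0, N], so the median total N is 2r, i.e. about r more
    # members of the reference class are expected after me.
    return rank

def upper_bound_future_members(rank, confidence=0.95):
    # With probability `confidence`, r/N >= 1 - confidence,
    # so N <= r / (1 - confidence); subtract the past members.
    return rank / (1 - confidence) - rank

# The two reference classes from the PS:
for name, rank in [("knows about the DA", 1_000),
                   ("all humans", 90_000_000_000)]:
    print(f"{name}: median ~{median_future_members(rank):,} more, "
          f"95% bound ~{upper_bound_future_members(rank):,.0f} more")
```

Both estimates have the same shape; they differ only in the rank plugged in, which is why switching reference classes rescales the numbers without changing the qualitative conclusion.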