In Laplace’s sunrise problem the question is: what are the chances that the Sun will rise again after it has risen on the 5000 previous days? Let’s reframe the problem: what are the chances that a catastrophe will not happen in year 5001, given that it didn’t happen in the previous 5000 years? Laplace’s rule of succession gives the chance of no catastrophe as 1 − 1/(5000 + 2). The “+2” appears here because we are speaking about discrete events, and the next year is year 5001.
So, simplifying, Laplace gives roughly a 1/5000 chance of catastrophe in the next year after 5000 years without a catastrophe.
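As a sanity check, here is a minimal sketch of the rule-of-succession arithmetic (the function name is mine, just for illustration):

```python
# Laplace's rule of succession: after s successes in n trials,
# the probability of success on the next trial is (s + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

p_no_catastrophe = rule_of_succession(5000, 5000)  # 5001/5002
p_catastrophe = 1 - p_no_catastrophe               # 1/5002, roughly 1/5000
```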
If we take Gott’s equation from the Doomsday argument, it also gives a probability of catastrophe of 1/5000 for the situation where I have survived 5000 years without a catastrophe BUT was randomly selected from that period. Laplace and Gott arrived at basically the same equation using different methods.
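Gott’s figure can be sketched under his usual uniform-position assumption (a sketch under that assumption; the function name is mine):

```python
# Gott's delta-t argument: assume our observation point is uniformly
# distributed over the phenomenon's total lifetime. Given an observed
# past duration t_past, P(it ends within the next T years) = T / (t_past + T).
def gott_p_end_within(t_past, horizon):
    return horizon / (t_past + horizon)

p_next_year = gott_p_end_within(5000, 1)  # 1/5001, roughly 1/5000
```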
I do not see Laplace’s problem as problematic; it is another version of the Doomsday argument, and both are correct. But it shows us that “random sampling” is not a necessary condition for having a Doomsday argument.
Reading this conversation a year later, I can’t help but wonder what @Ape in the coat thinks about this. If a completely unrelated argument (Laplace’s sunrise) confirms the results of another argument (the Doomsday argument), then that seems to increase the likelihood that the conclusion of the Doomsday argument, and thus the argument itself, is right. Do you not agree, Ape in the coat? Or do you think that Laplace’s sunrise argument is also wrong?
Not much. I initially considered this thread “not worth getting into”, as @avturchin’s line of reasoning is based on multiple different small confusions, addressing each of which would be a huge chore, and it is only tangentially relevant to the topic of the post in the first place. I agree with this assessment today. But I will present the general outline of what is wrong with it for you and the future readers.
First of all, Gott’s version of DA is different from the version of DA I’m talking about in this post. It’s a different mathematical model, based on the number of years humanity has existed instead of the number of humans, and it returns a different estimate for extinction: 97.5% confidence of extinction within the next 8 million years, assuming that humanity has existed for 200,000 years, regardless of birth rates. Suffice it to say, these two versions of DA produce different predictions, and by shifting some free parameters in the models we can get even more different predictions still. This is completely expected if DA arguments are wrong.
Likewise, Laplace’s sunrise is yet another mathematical model, and a certain interpretation of it produces a vaguely similar result to Gott’s version of DA. Even assuming LS is applicable, this isn’t really an argument in favor of GDA or any kind of anthropic reasoning. Imagine that the correct answer to a test question is 1/5002, while your reasoning, which makes an extra assumption, produces the answer 1/5000. Clearly, this doesn’t mean that your reasoning is correct, nor does it justify the extra assumption.
And then there is the entirely separate question of the applicability of LS to the situation at hand. LS also doesn’t fully capture our knowledge state, but it is at least less wrong, in a sense, as it doesn’t make the particular mistake which I’m talking about in this post.
I think there is a subtle difference between DA and Laplace:
Laplace predicts a “minimal probability”: there is at least a 4999/5000 chance that the Sun will rise tomorrow (that is, that humanity will not go extinct next year). DA predicts a definite value: there is a 1 in 5000 chance that humanity will go extinct next year. So Laplace supports a reverse Doomsday argument: the end can’t be very nigh.
But Laplace doesn’t predict that humanity will inevitably go extinct at an age 10 times longer than its current one. DA, in contrast, predicts that the chances of surviving to such an age are 10 percent.
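The 10 percent figure falls out directly of the same Gott-style uniform-position assumption (a sketch, again with a made-up function name):

```python
# Same uniform-position assumption: if the elapsed fraction r = t_past / total
# is uniform on (0, 1), then P(total age exceeds A) = P(r < t_past / A) = t_past / A.
def gott_p_survive_to(t_past, target_age):
    return t_past / target_age

p_ten_times = gott_p_survive_to(5000, 50000)  # 0.1, i.e. 10 percent
```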