It’s even more interesting to see how people react when faced with arguments that are either very bad or very good, but they can’t tell which. I have described the doomsday argument to many random people. The typical reaction is a kind of nervous laugh, followed by quick dismissal. Not a single one became genuinely curious and tried to work it out. It’s awful.
I don’t think the doomsday argument is a bad argument mathematically. It’s just completely useless, like predicting whether the sun will rise tomorrow using Laplace’s rule of succession. We have vast amounts of information that has some bearing one way or another on the likelihood of the end of the world happening at any particular time. It’s absurd to throw all that away. As such, dismissal seems completely reasonable to me. I really don’t think there is anything to be learned by calculating the expected number of total people ever to exist using nothing but a uniform prior.
It’s quite clearly a very bad argument. It’s an argument where formalising why it’s bad takes a noticeable amount of effort, but noticing that it is bad is almost instant.
To explain why it instantly reads as bad, think about doomsday predictions: they always get pushed back when they fail to come true, right? This is a doomsday prediction designed to push itself back continuously. Every year that passes, the argument’s 95% upper bound on the total population rises by roughly 2,750,000,000 (about twenty times the annual number of births, since that bound is twenty times the number of people born so far).
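To make the “recedes continuously” point concrete, here is a minimal sketch. The figures are rough illustrative assumptions, not precise counts: ~117 billion humans born so far, ~137.5 million births per year.

```python
# Sketch of how the doomsday argument's 95% bound recedes each year.
# Illustrative assumed figures: ~117 billion humans born so far,
# ~137.5 million births per year.
BORN_SO_FAR = 117e9
BIRTHS_PER_YEAR = 137.5e6

def upper_bound(born, confidence=0.95):
    # With probability `confidence` you are not among the first
    # (1 - confidence) fraction of all humans ever born, so at most
    # born / (1 - confidence) humans will ever exist.
    return born / (1.0 - confidence)

shift_per_year = upper_bound(BORN_SO_FAR + BIRTHS_PER_YEAR) - upper_bound(BORN_SO_FAR)
print(f"{shift_per_year:.3e}")  # ~2.75e9: the bound recedes by 20x annual births
```

Whatever the exact birth rate, the structure is the same: surviving another year mechanically pushes the bound out by twenty years’ worth of births, which is exactly the “prediction that moves back on its own” behaviour.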
I might give a nervous laugh at being presented with that argument, due to being unsure whether or not you were joking. I would then have quickly dismissed it.
Had you asked me to explain why it was wrong, I would have, but unless I was bored I would have been unlikely to bother debating you on something I find so silly.
Even if it’s really a bad argument, the badness is far from obvious—just look at the Wikipedia page. Robin Hanson doesn’t find it silly, for example. Neither do I: in my opinion, anthropic reasoning is an important mystery because we have no algorithm for determining whether a given anthropic argument is valid. See Eliezer’s posts “Outlawing Anthropics” and “Forcing Anthropics”. Also consider that thinking hard about when anthropic reasoning works and when it doesn’t has led Wei Dai to the central insight of UDT. I don’t believe you have examined the object level as deeply as it deserves. Snap judgments only get us so far. A snap judgment cannot lead you from Zeno’s paradox to discovering calculus.
Even if it’s really a bad argument, the badness is far from obvious—just look at the Wikipedia page.
The fact that people are willing to believe something doesn’t make it not obviously wrong. It just means they are, for whatever reason, blind to its obvious wrongness.
For an example of why it fails in real-world terms, consider the problem of coming up with the reference class. Humans? Great Apes? Apes? Mammals? Vertebrates? Earth-origin Living Organisms? Each produces a different prediction for the doomsday scenario, but a lot of plausible extinction paths for humans would at least take the rest of the apes with us.
For an example of why it fails the moment we have other evidence, consider Bob. Bob is 40 years old. He believes the doomsday argument. Someone points a gun at Bob, and threatens to kill him if he doesn’t give up his wallet. Bob reasons “There’s only a 0.001% chance that I’m in the last 0.001% of my life; so the danger of me dying in the next two hours is minuscule!”. Is Bob right?
Now suppose Sean turned 21 three months ago; he has just become an adult. He concludes, from the doomsday argument, that since he has been an adult for 3 months, there is a 95% chance that he will stop being an adult within 60 months, i.e. five years. So he’s going to die within five years?
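Sean’s arithmetic, for what it’s worth, is just the doomsday rule applied to his time as an adult; a quick sketch of the calculation (the questionable part is the premise that his position in his adulthood is uniformly sampled, not the division):

```python
# Sean's "adult doomsday": the doomsday rule says that with probability
# `confidence` you are past the first (1 - confidence) fraction of the
# total duration, so the total is at most elapsed / (1 - confidence).
def doomsday_bound(elapsed, confidence=0.95):
    return elapsed / (1.0 - confidence)

months_as_adult = 3
total_bound = doomsday_bound(months_as_adult)    # 60 months at 95%
remaining_bound = total_bound - months_as_adult  # 57 more months
print(total_bound, remaining_bound)
```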
A snap judgment cannot lead you from Zeno’s paradox to discovering calculus.
No, but a snap judgement can lead you to correctly conclude that if each time you halve the distance you also halve the time, the total time to cross the line is finite, even though it contains an infinite number of instants.
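That snap conclusion is just the statement that a geometric series converges, and it takes two lines to check (illustratively, with the first leg taking one unit of time):

```python
# Zeno's runner: halving the remaining distance also halves the time for
# that leg, so leg n takes 1 / 2**n units.  Infinitely many legs sum to a
# finite total.
def crossing_time(legs):
    return sum(1.0 / 2**n for n in range(legs))

print(crossing_time(50))  # approaches 2.0, not infinity
```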
One nice formulation of the reference class for the DA is “observer-moments that think about the DA”. Maybe there are even better formulations.
About Bob: the question is whether the DA constitutes valid evidence, not whether it’s complete evidence. Of course the gun is stronger. But if you were in a state of near-total ignorance, would the DA not sway you even a little bit?
About Sean: most adults who consider Sean’s “adult doomsday” variation will turn out to be right. You have simply cherry-picked a counterexample. If such tactics were valid for breaking the DA, they would also break all probabilistic reasoning, which isn’t what we want.
It looks to me like you’re trying to fight your way to a preordained conclusion (“see! it was wrong all along!”), and that is almost always a bad sign.
One nice formulation of the reference class for the DA is “observer-moments that think about the DA”. Maybe there are even better formulations.
And that might even conceivably be a good formulation. That is NOT obviously a bad argument. It may or may not be a good argument, but it’s not obviously bad. I can’t just plug in a word-substitution and get the same argument to say something different without breaking the argument.
It’s also not the argument you presented me with. You presented me with the argument formulated over humans. Which is obviously a bad argument.
About Bob: the question is whether the DA constitutes valid evidence, not whether it’s complete evidence. Of course the gun is stronger. But if you were in a state of near-total ignorance, would the DA not sway you even a little bit?
No, because reference classes that are identical with regard to the present (e.g. “humans and cyborgs”, “humans”, “humans who live their entire lives on Earth”) can still be very different with regard to the future. And hypothetical ignorant me would be able to come up with such reference classes, unless hypothetical ignorant me lives in a very, very simplified world.
In an extremely simplified world, with my only knowledge being that I am Mr. 989,954,292,132, I might buy into the doomsday argument as it applies to the numbered Mr.s.
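In a world that stripped-down, the calculation can actually be carried out. Here is a sketch; the scale-invariant 1/N prior over the total number of Mr.s is my assumption, not something the comment specifies.

```python
# Doomsday posterior in the stripped-down world: all you know is that you
# are person number n.  Assuming prior P(N) proportional to 1/N over the
# total count N, and likelihood P(n | N) = 1/N, the posterior density is
# proportional to N**-2 for N >= n.  Work in units of n (r = N / n) on a
# truncated grid from r = 1 up to r ~ 1000.
step = 0.001
grid = [1.0 + step * k for k in range(1_000_000)]
weights = [r ** -2.0 for r in grid]   # unnormalised posterior density
half_mass = sum(weights) / 2.0

acc = 0.0
for r, w in zip(grid, weights):
    acc += w
    if acc >= half_mass:
        median_ratio = r
        break
print(round(median_ratio, 2))  # ~2.0: median total is about twice your index
```

The ~2n posterior median is the standard result for this prior; the point is that with nothing but the index to go on, there is nothing else for the posterior to depend on, which is precisely the “extremely simplified world” condition.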
About Sean: most adults who consider Sean’s “adult doomsday” variation will turn out to be right. You have simply cherry-picked a counterexample. If such tactics were valid for breaking the DA, they would also break all probabilistic reasoning, which isn’t what we want.
True, my apologies, that was an obviously bad argument, and I missed it.
I’ve had prolonged debates with philosophers who honestly seem to believe that colour doesn’t really exist, and with Truthers who think that the US government bombed the two main WTC towers but have no concept as to why the US government would need to do so.
Really? I’m not a Truther but I could come up with a just-so story at the drop of a hat.
As could I. However, the average Truther has been convinced that it was done as an excuse to go to war.
But I deleted that part of the post for a reason. Politics is the mindkiller and all.
… Lost me. That sounds like a concept as to why to me. (Which is not to say that it is a likely possibility.)
There’s no need to bomb the towers, risking discovery, when simply having the smouldering towers standing there will be a sufficient excuse.
The planes, on their own, accomplish the “give the politicians an excuse” goal. Bombing the towers as well can’t be explained by a goal that’s already achieved.
The evidence and predictions surrounding our ability to extend our lifespans and solve life- and existence-threatening problems are enough to suppose that human history is not closed at the far end, or at least is not modeled by the same function as pre-actuarial-escape-velocity human history.
That is, we have good reason to believe we are among the earliest of all humans, because “human” is two sets appended together, and the doomsday argument is based on the statistics of the first set alone.
That is my response to the doomsday argument—I don’t know if it’s rigorous.
I think from a utilitarian point of view it’s very proper to dismiss arguments that have no relevance to real life and no actual predictive capacity -- the doomsday argument, like quantum immortality, seems to me the modern equivalent of Zeno’s Achilles and the Tortoise in irrelevant philosophical silliness.