Over the last few centuries, the absurdity heuristic has done worse than maximum entropy—ruled out the actual outcomes as being far too absurd to be considered. You would have been better off saying “I don’t know”.
Really? I doubt it.
On the set of things that looked absurd 100 years ago, but have actually happened, I’m quite sure you’re correct. But of course, that’s a highly self-selected sample.
On the set of all possible predictions about the future that were made in 1900? Probably not.
I recall reading, not long ago, a list of predictions made in 1900 about the technological and social changes expected during the 20th century. It might have been linked from a previous discussion on this blog, in fact. The surprising thing to me was not how many predictions were way off (quite a few), but how many were dead on, or about as close as they could have been, given the language and concepts available in 1900 (maybe half).
I’m not going to claim that anti-absurdity is a good heuristic, but I don’t think you’re judging it quite fairly here. I think it’s a fair bit better than maximum entropy.
The problem is that by declaring something “Absurd” you’re making a very strong bet against it. You’re going to lose a fair number of these bets.
Suppose calling something absurd merely means assigning it 1% probability. If you’re right about that 90% of the time, each call you get wrong costs you a factor of 10 on your accuracy, far more than you gain from ascribing the extra 9% probability to the nine cases where you happened to be right. And 1% is high enough that few would call it truly absurd.
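To make the arithmetic concrete, here is a minimal sketch, assuming we score by likelihood ratios against the calibrated 10% assignment; the scenario is just the one from the paragraph above:

```python
# Quick check of the arithmetic above: compare assigning 1% ("absurd")
# versus the calibrated 10% across ten predictions, one of which comes true.
absurd, calibrated = 0.01, 0.10

# Likelihood-ratio loss on the one prediction that comes true:
loss = calibrated / absurd                      # 10.0, a factor-of-10 hit

# Likelihood-ratio gain on the nine that don't:
gain = ((1 - absurd) / (1 - calibrated)) ** 9   # (0.99/0.90)^9, about 2.36

print(loss, gain, gain / loss)                  # net factor about 0.24, a loss
```

Even with nine wins against one loss, the net factor comes out around 0.24, so the “absurd” caller ends up well behind the honest 10%.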
Calling something absurd is asking to be smacked hard (in terms of accuracy) if you’re wrong—and feeling safe about it.
Bringing myself back to what I was thinking in 2007: I think we have some semantic confusion around two different senses of absurdity. One is the heuristic Eliezer discusses: the determination of whether a claim or prediction has surface plausibility. If not, we file it under “absurd”. An absurdity heuristic would be any heuristic that treats surface plausibility, or the lack of it, as evidence for or against a claim.
On the other hand, we have the sense of “Absurd!” as a very strong negative claim about something’s probability of truth. So “Absurd!” stands in for “less than .01/.001/whatever”, rather than a term such as “unlikely”, which might mean “less than .15”.
I was talking only about the first sense. It seemed to me that Eliezer was making a very strong claim: that the absurdity heuristic (in the first sense) does no better than maximum entropy. That’s equivalent to saying that surface plausibility, or the lack of it, amounts to zero evidence, and that allowing yourself to modify probabilities downward even a small amount on account of “absurdity” would be an error.
I strongly doubt that this is the case.
I agree completely that a claim of “Absurd!” in the second sense about a long-dated future prediction cannot ever be justified merely by absurdity in the first sense.