“If someone had a reason to seriously present it, then I’d not dismiss it out of hand”
It’s important to note that no one has. You can’t update on fictitious evidence.
In the (unlikely) case that something improbable ends up with strong evidence backing it, then it becomes probable whether or not it was called “absurd”. Until then, we dismiss it because it’s absurd.
I think that absurdity, in this sense, is just an example of Occam’s Razor / Bayesian rationality in practice. If something has a low prior, and we’ve no evidence that would make us raise our probability estimate, then we should believe that the idea probably isn’t true.
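To make the “low prior plus strong evidence” point concrete, here’s a minimal sketch of Bayes’ rule in odds form (the numbers are illustrative, not from the original comment): even a claim with a prior of 1-in-1000 stops being dismissible once the evidence is sufficiently many times more likely under the claim than under its negation.

```python
def posterior(prior, likelihood_ratio):
    """Update a prior probability given a likelihood ratio.

    Posterior odds = prior odds * likelihood ratio,
    then convert the odds back into a probability.
    """
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# An "absurd" claim: prior of 0.001 (1-in-1000).
# Evidence that is 1000x more likely if the claim is true
# pulls the posterior up to roughly even odds.
print(posterior(0.001, 1000))  # ~0.50
```

So the absurd claim becomes probable exactly when, and only when, the evidence actually shows up; until then the low prior carries the day.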
I’ve always assumed that the absurdity bias was a tendency to do something slightly different. In this context, absurdity is a measure of how closely an idea conforms to our usual experiences. It’s a measure of how plausible an idea feels to our gut. By this definition, absurdity is being used as a proxy for “low probability estimate, rationally assigned”.
It’s often a good proxy, but not always.
Or perhaps another way to put it: when evidence seems to point to an extremely unlikely conclusion, we tend to doubt the accuracy of the evidence. And the absurdity bias is a tendency to doubt the evidence more thoroughly than ideal rationality would demand.
(Admission: I’ve noticed that I’ve had some trouble defining the bias, and now I’m considering the possibility that “absurdity bias” is a less useful concept than I thought it was.)