But what I found even more fascinating was the qualitative distinction between “certain” and “uncertain” arguments, where if an argument is not certain, you’re allowed to ignore it. Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you’re allowed to keep it.
I think that’s exactly what’s going on. The people you speak of are mentally dealing with social permission, not with probability algebra. The non-zero probability gives them social permission to describe it as “it might happen”, and the detail that the probability is 1 / googolplex stands a good chance of getting ignored, lost, or simply not appreciated. (Similarly, the tiny uncertainty attached to a near-certain claim gives permission to say “it might not be true”.)
And I don’t just mean that it works in conversation. The person who makes this mistake has probably internalized it too.
It struck me that way when I read your opening anecdote. Your interlocutor talked like a lawyer who was planning on bringing up that point in closing arguments—“Mr Yudkowsky himself admitted there’s a chance apes and humans are not related”—and not bringing up the minuscule magnitude of the chance, of course.
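To put some numbers on the opening point, here is a minimal sketch of the probability algebra that gets skipped. It is my own illustration, not anything from the original exchange; the 50/50 prior and the single alternative hypothesis are assumptions chosen purely for simplicity. The point is only that a likelihood of 1/googol drives the posterior to roughly 2 × 10⁻¹⁰⁰, which is “not zero” in exactly the way that should never matter for keeping a belief.

```python
from fractions import Fraction

# A minimal sketch (illustrative only): Bayes' theorem with one alternative
# hypothesis, showing how little difference a likelihood of 1/googol versus
# exactly 0 makes to the posterior.
GOOGOL = 10 ** 100

def posterior(prior, likelihood_h, likelihood_not_h):
    """P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | ~H) P(~H)]."""
    numerator = likelihood_h * prior
    evidence = numerator + likelihood_not_h * (1 - prior)
    return numerator / evidence

half = Fraction(1, 2)  # assumed 50/50 prior, purely for illustration

# Likelihood exactly zero: the belief must be abandoned outright.
print(posterior(half, Fraction(0), half))                 # 0

# Likelihood 1/googol: the posterior is about 2e-100 -- not literally zero,
# but it should be treated the same way for any decision you will ever face.
print(float(posterior(half, Fraction(1, GOOGOL), half)))  # ~2e-100
```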