“[N]ot to know things that we do in fact know,” and “Confidently inform them when we know they’re wrong.”
Except, as a rationalist, you can’t say that you know there is no god. You may be able to say that you believe it to be unlikely that there is a god, or that you have seen no evidence that would make you believe there is one. The fact is that it is (near) impossible to prove a negative. Likewise, you cannot say that you know there are no purple polar bears, fairies, unicorns, or black swans. The burden of proof always falls to the affirmative, but you can’t rationally and conclusively prove the negative.
Using “know” to mean “have exactly 100% certainty” means you can’t prove a positive either. (I don’t “know” that the computer in front of me exists, but the probability that it is an illusion or trick is low enough for me to ignore.)
What do you think you’re adding to the discussion by trotting out this sort of pedantic literalism?
Unless someone explicitly says they know something with absolute 100% mathematical certainty, why don’t you just use your common sense and figure that when they say they “know” something, they mean they assign it a very high probability, and believe they have epistemologically sound reasons for doing so?
You’re using a definition of “know” that practically nobody would endorse (assuming they also accepted your other premises). Once you have certainties expressed properly as probabilities, a contextualist epistemology falls out naturally. (Though there are other nuanced views that would work.)
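The point about expressing certainties as probabilities can be made concrete with a quick Bayesian sketch (the function name and the specific numbers are illustrative, not from the thread): an agent with a strong but non-extreme prior can still be moved by evidence, while an agent at probability 1 is immovable no matter what it observes, which is why “100% certainty” is a degenerate notion of knowing.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule, from a prior P(H) and the
    likelihood of the evidence under H and under not-H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A very confident (0.999) but non-certain agent still responds to
# evidence that is 99x more likely if the hypothesis is false:
print(bayes_update(0.999, 0.01, 0.99))  # drops to about 0.91

# At "absolute certainty" (prior = 1.0), the same evidence changes
# nothing: the posterior is 1.0 forever, whatever is observed.
print(bayes_update(1.0, 0.01, 0.99))  # stays at 1.0
```

This is one way to cash out the earlier point: treating “know” as “assign very high probability” keeps knowledge revisable, whereas literal 100% certainty makes further evidence mathematically irrelevant.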
LessWrong wiki entry on absolute certainty