Good point about infinite certainty, poor example.
Assert 99.9999999999% confidence, and you’re taking it up to a trillion. Now you’re going to talk for a hundred human lifetimes, and not be wrong even once?
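For concreteness, here is the rough arithmetic behind that quoted figure (the pace and lifetime numbers below are my own illustrative assumptions, not anything from the post):

```python
# Rough arithmetic behind the quoted figure: twelve nines of confidence
# (99.9999999999%) asserts an error rate of one mistake per trillion statements.
error_rate = 1 - 0.999999999999      # ~1e-12

# Assumed pace (illustrative numbers, not from the post): one assertion every
# 10 seconds, 16 waking hours a day, 75-year lifetimes, a hundred lifetimes of it.
statements_per_lifetime = (16 * 3600 / 10) * 365 * 75    # ~158 million
statements = statements_per_lifetime * 100               # ~15.8 billion

expected_errors = statements * error_rate
print(f"statements over a hundred lifetimes: {statements:,.0f}")
print(f"expected errors at the asserted confidence: {expected_errors:.3f}")  # ~0.016
```

A hundred lifetimes of near-constant talking comes to only about sixteen billion statements, so at a trillion-to-one error rate you would expect essentially no mistakes over the entire run; that is how strong a claim twelve nines of confidence really is.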
…evidence that convinced me that 2 + 2 = 4 in the first place.
“(the sum of) 2 + 2” means “4”; or to make it more obvious, “1 + 1” means “2”. These aren’t statements about the real world*, hence they’re not subject to falsification, they contain no component of ignorance, and they don’t fall under the purview of probability theory.
*Here your counter has been that meaning is in the brain and the brain is part of the real world. Yet such a line of reasoning, even if it weren’t based on a category error, proves too much: it cuts the ground from under your absolute certainty in the Bayesian approach, the same certainty you needed in order to make accurate statements about 99.99…% probabilities in the first place.
The laws of probability are only useful for rationality if you know when they do and don’t apply.
We don’t have absolute certainty in ‘the Bayesian approach’. It would be counterproductive at best if we did: our certainty would then be too great for evidence from the world to change our minds, so we’d have no reason to think that we’d believe differently even if the evidence did contradict ‘the Bayesian approach’. In other words, as Bayesians we’d have no reason to believe our own belief, though we’d remain irrationally caught in the grip of that delusion.
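To spell out the ‘too great for evidence to change our minds’ step (this is just Bayes’ theorem evaluated at a prior of 1, written out here for concreteness): for any evidence E with P(E) > 0,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{P(E \mid H)\cdot 1}{P(E \mid H)\cdot 1 + P(E \mid \neg H)\cdot 0} = 1.$$

Whatever E turns out to be, the posterior comes back as 1; a probability of exactly 1 (or 0) puts a belief permanently beyond the reach of evidence.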
Even assuming that it’s a matter of word meanings that the four millionth digit of pi is 0, you can still be uncertain about that fact, and Bayesian reasoning applies to such uncertainty in precisely the same way that it applies to anything else. You can acquire new evidence that makes you revise your beliefs about mathematical theorems, etc.
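To make that concrete, here is a minimal sketch of such an update in Python (the prior, the two digit-printing programs, and the 99% reliability figure are all hypothetical, chosen only for illustration):

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One application of Bayes' theorem to a yes/no claim."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# H: "the four millionth digit of pi is 0".  Before checking, any given
# digit is 0 roughly one time in ten.
p = 0.1

# Two independent, hypothetical programs each compute that digit and report "0".
# Assume each is right 99% of the time and, when wrong, prints a uniformly
# random wrong digit (so a false "0" has probability 0.01 / 9).
for _ in range(2):
    p = update(p, likelihood_if_true=0.99, likelihood_if_false=0.01 / 9)

print(round(p, 6))  # ~0.999989: very confident, but still short of probability 1
```

The claim itself is pure mathematics, yet confidence in it still rises and falls with fallible evidence, and never reaches 1.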
Leaky induction. Didn’t that feel a little forced?
We can be wrong about what the words we use mean.
What category error would that be?