I have suddenly become mildly interested in investigating an edge case of this argument. I am not coming at this from the perspective of defending claims of infinite certainty, which are only useful in certain nonsense-arguments. I just found it kind of fun, and maybe an answer would improve my understanding of the reasoning behind this post.
So, let’s suppose you have a statement so utterly trivial, and containing so little practical content, that you wouldn’t even think of it as a worthwhile statement — for example, “A is A”. Now, this is a bad example, because you can already see the nonsense incoming, but I’m not sure there are any good ones. Let’s then go by the practical definition of certainty at the 1 − 1/1000 level: you need to collect about 1000 statements at the same level of certainty but with different actual drivers behind them, say them all out loud, and be wrong once. The only problem is that there are, like, 10 such statements, ever. The others are too complex and can’t be put on the same I-am-sure-of-this team. So you can’t properly measure whether you are sure of this statement even at the 90% level. Technically, it might then pass as a candidate for certainty of 1.0, by the “say and be wrong rarely” definition, because there are only ever this many such trivial statements, and the probability of being correct on all of them, no matter how many times you repeat them, is substantial.
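To make the “probability of being correct on all of them” point concrete, here is a minimal sketch, assuming an independent per-statement error rate (the 0.001 figure comes from the 1 − 1/1000 certainty level in the post; the count of 10 trivial statements is my illustrative guess from above):

```python
def prob_all_correct(n_statements: int, error_rate: float) -> float:
    """Probability of being right on all n statements,
    each judged wrong independently with the given error rate."""
    return (1 - error_rate) ** n_statements

# ~10 trivial statements, each held at the 1 - 1/1000 certainty level:
p = prob_all_correct(10, 1 / 1000)
print(p)  # roughly 0.99 — you'd likely never observe yourself being wrong
```

So even at merely 1 − 1/1000 per-statement confidence, the pool of trivial statements is too small for the frequency-of-error test to ever distinguish that from certainty of 1.0, which is the edge case I’m gesturing at.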
I also don’t think it would bother me much if I were stripped of the possibility of changing my mind about “A is A”.