Weird, I’m totally in the other boat. I think we can use sub-1% or super-99% probabilities easily, all the time.
I just went on a long road trip. What probability should I have used that my car springs a brake fluid leak slow enough that it’s going to be useful for me to have a can of brake fluid in the car? I’d guess it happens once every 250k miles or so, and I just drove about 1k, so that’s about 1 in 250 (or let’s say 1 in 500 to guesstimate at the effect of doing highway driving). Bam, sub-1% probability. Now, did I need to consciously evaluate the probabilities to decide that I should definitely bring engine oil, that I might as well bring brake fluid even though it’s not super important, and that I didn’t need to bring a bicycle pump? No. But if you ask me, I don’t see what’s stopping me from giving totally reasonable probabilities for needing these things.
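For concreteness, here’s that back-of-the-envelope arithmetic written out as a quick sketch (the leak rate and the highway adjustment are just my guesses from above, not data):

```python
# Back-of-the-envelope estimate of P(slow brake-fluid leak on this trip).
# All inputs are rough guesses from the comment above, not measured data.
miles_per_leak = 250_000   # guess: one slow leak every ~250k miles
trip_miles = 1_000         # length of the road trip
highway_factor = 0.5       # guess: highway driving roughly halves the risk

p_leak = (trip_miles / miles_per_leak) * highway_factor
print(f"P(leak) ~ {p_leak:.4f} (about 1 in {round(1 / p_leak)})")
# P(leak) ~ 0.0020 (about 1 in 500)
```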
I think there’s a perspective that can synthesize both of these observations.
I could easily write a list of predictions of which less than 1 in 10,000 would be false:
The Sun will be shining somewhere on Earth at 2022-02-19 18:34:25.00001 UTC
The Sun will be shining somewhere on Earth at 2022-02-19 18:34:25.00002 UTC
The Sun will be shining somewhere on Earth at 2022-02-19 18:34:25.00003 UTC
etc...
Of course, I’m “cheating”. There seem to be fewer than 100 consciously distinct plausibility values for me (or probably anyone). What I actually believe in this situation are several facts about how the Sun, Earth, time, and shining work, which I believe at the highest plausibility value I can distinguish/track (something like >99.5%). I’m able to logically synthesize these into the above class of statements, from which I can deduce that the implied probability of those statements is quite high (much more than 99.5% likely to hold). This is an important part of what makes abstraction so powerful.
If you asked me for 10,000 true statements that I could not explicitly logically connect to one another, I would be surprised if more than 99.5% of them were actually true, even putting my highest possible level of care and effort into it. I think this is an inherent limitation of how my mind works: there just isn’t a finer plausibility value available for me to assign to them (which is an inherent limitation of being a bounded agent).
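To put rough numbers on that gap (reusing the 10,000-statement and 99.5% figures from above; nothing here is new data):

```python
# Rough calibration arithmetic for 10,000 logically *unrelated* statements.
# 0.995 is the guessed "finest plausibility level I can consciously track".
n_statements = 10_000
per_statement_reliability = 0.995

expected_errors = n_statements * (1 - per_statement_reliability)
print(f"Expected false statements at 99.5% each: ~{expected_errors:.0f}")  # ~50

# To expect fewer than 1 false statement in the whole list, each statement
# would need reliability above 1 - 1/10,000 = 99.99% -- a finer grade of
# confidence than I can assign directly, but one that deduction from a few
# robust shared premises can effectively reach.
required_reliability = 1 - 1 / n_statements
print(f"Reliability needed for <1 expected error: {required_reliability:.2%}")
# 99.99%
```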
The key, I think, is that there is an important sense in which we can be more certain of logical deductions than of intuitive beliefs, notwithstanding the fact that we are prone to making logical errors (which is why I used redundant lines of reasoning and large margins for error to generate the above example). It’s easy to be overconfident, but it’s almost as easy to be too pessimistic about what we can know.