If he just has an instinct that a 6 should come up again, but can’t explain where that instinct comes from or defend that belief in any kind of rational way other than “it feels right”, then he’s probably not being rational.
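For reference, here is the arithmetic that instinct runs up against, assuming the usual model of a fair six-sided die with independent rolls (an assumption, since the setup isn’t spelled out in the thread): the chance of a 6 on the next roll doesn’t depend on what came before.

$$P(X_{n+1} = 6 \mid X_1, \dots, X_n) \;=\; P(X_{n+1} = 6) \;=\; \tfrac{1}{6}.$$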
Maybe in the specific example of randomness, but I don’t think you can say the general case of ‘it feels so’ is indefensible.
This same mechanism is used for the really complicated black-box intuitive reasoning that underpins any trained skill. So in areas one has a lot of experience in, or areas which are evolutionarily keyed in, such as social interactions or the natural environment, this isn’t an absurd belief.
In fact, knowing that these black-box intuitions exist means they have to be included in our information about the world, so ‘give high credence to the black box when it says something’ may be the best strategy if one’s ability for analytic reasoning is insufficient to determine strategies with better results than that.
Eh. Maybe, but I think that any idea which seriously underpins your actions and other belief systems in an important way should be something you can justify in a rational way. That doesn’t mean you always need to think about it that way; some things become “second nature” over time, but you should be able to explain the rational underpinnings if asked.
If you’re talking about a trained skill, “I’ve been fixing cars for 20 years and in my experience when you do x you tend to get better results than when you do y” is a perfectly rational reason to have a belief. So is “That’s what it said in my medical school textbook”, etc.
But, in my experience, people who put too much faith in their “black boxes” and don’t ever think through the basis of their beliefs tend to behave in systematically irrational ways that probably harm them.
It’s funny, I think this is probably always true as a guideline (that you should try to justify all your ideas) but might always break down in practice (all your ideas probably can’t ever be fully justified, because of Agrippa’s trilemma: they’re either justified in terms of each other or not justified at all, and if they are justified in terms of other ideas, they eventually end up either circularly justified, in an infinite regress, or resting on things that are unjustified). We might gain some ground by separating out ideas from evidence, and say we accept as axiomatic anything that is evidenced by inference until we gain additional facts that lend context that resituates our model so that it can include previous observations… something like that. Or it might be that we just have to grandfather in some rules to avoid that Gödelian stuff. Thoughts?
Yes, that is a very good point. My current view is that the reason for this is a confusion that comes from seeing knowledge as based on rationality when it is in reality based on experience. Rationality is the manipulation of basic experiential building blocks, and these ‘belief’ blocks might correspond to reality or not. With the scientific method this correspondence has been clarified to such an extent that it seems as if knowledge is generated purely through rationality, but that is because we don’t tend to follow our assumptions to the limits you are describing in your comment. If we check our assumptions, and then the assumptions behind our assumptions, etc., we will reach our fundamental presuppositions.
Yeah, that’s a good point; on some level, any purely logical system always has to start with certain axioms that you can’t prove within that system, and in the real world that’s probably even more true.
I guess, ideally, you would want to be able to at least identify which of your ideas are axioms, and keep an eye on them in some sense to make sure they don’t end up conflicting with other axioms?