What about the gambler who knows not of the gambler’s fallacy, and believes that because the die hasn’t rolled an odd number for the past n turns, it will definitely roll odd this time (after all, the probability of not rolling odd n times in a row is 2^(-n))? Are they then rational for betting the majority of their fund on the die rolling odd? Letting what’s rational depend on the knowledge of the agent involved leads to a very broad (and possibly useless) notion of rationality. It may lead to what I call “folk rationality” (doing what you think would lead to success).
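A quick simulation makes the arithmetic concrete (a minimal sketch; the function name and trial count are my own invention): the streak is indeed unlikely in advance, at probability 2^(-n), but it carries no information about the next roll.

```python
import random

def odds_of_odd_after_streak(streak_len, trials=1_000_000):
    """Estimate P(next roll is odd | previous streak_len rolls were all even)."""
    hits = total = 0
    streak = 0  # current run of consecutive even rolls
    for _ in range(trials):
        roll = random.randint(1, 6)
        if streak >= streak_len:  # the gambler's "due" situation
            total += 1
            hits += roll % 2      # 1 if odd, 0 if even
        streak = streak + 1 if roll % 2 == 0 else 0
    return hits / total

print((1 / 2) ** 5)                 # ~0.031: advance probability of a 5-long even streak
print(odds_of_odd_after_streak(5))  # ~0.5: the next roll is still a coin flip
```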
I think it depends where the knowledge comes from, right?
If he just has an instinct that a 6 should come up again, but can’t explain where that instinct comes from or defend that belief in any kind of rational way other than “it feels right”, then he’s probably not being rational.
If he actually did an experiment and rolled a die a bunch of times, and just by coincidence it actually seemed to come out that whenever a 6 hadn’t come up for a while it would show up, then it might be a rational belief, even though it is incorrect. Granted, if he had better knowledge of statistical methods and such he probably could have run the experiment in a better way, but I think if someone gathers actual data and uses that to arrive at an incorrect belief and then acts on that belief, he’s still behaving rationally. Same thing if you developed your beliefs through other rational methods, like logical deduction based on other beliefs you already had established through rational means, or probabilistic beliefs based on some combination of other things you believe to be true and observations, etc.
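Here is a rough illustration of how an honestly run but small experiment can coincidentally “confirm” the pattern (the sample sizes and the 75% threshold are arbitrary choices of mine):

```python
import random

def post_streak_odd_rate(n_rolls=60, streak_len=3):
    """In one short experiment, what fraction of rolls immediately
    following a run of streak_len evens came up odd?"""
    rolls = [random.randint(1, 6) for _ in range(n_rolls)]
    outcomes = [
        rolls[i] % 2 == 1
        for i in range(streak_len, n_rolls)
        if all(r % 2 == 0 for r in rolls[i - streak_len:i])
    ]
    return sum(outcomes) / len(outcomes) if outcomes else None

# Repeat the small experiment many times: a noticeable fraction will,
# purely by chance, show odd coming up well over half the time after a
# streak, exactly the kind of data that could honestly mislead someone.
rates = [r for r in (post_streak_odd_rate() for _ in range(1000)) if r is not None]
print(sum(r >= 0.75 for r in rates), "of", len(rates), "runs looked like confirmation")
```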
A rational agent cannot actually know everything; all the rational agent can do is act on the best information it has. And you can only spend so much in the way of resources and time trying to perfect that information before acting on it.
So, I would say rationality is defined by:
A- how did you arrive at your beliefs about the state of the world, and
B- did you act in a way that would maximize your chances of “winning”, if your beliefs, formed via rational methods, are correct
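Criterion B can be made concrete with a toy decision rule (my own illustrative framing, not a formal definition): the act is judged against the agent’s beliefs, not against the true probabilities.

```python
def belief_rational_action(beliefs):
    """Pick the action with the highest believed chance of "winning".
    Criterion B only asks whether the act maximizes success given the
    beliefs; criterion A separately asks how the beliefs were formed."""
    return max(beliefs, key=beliefs.get)

# The gambler's (mistakenly formed) beliefs after a long run of evens:
beliefs = {"bet_odd": 0.97, "bet_even": 0.03}
print(belief_rational_action(beliefs))  # "bet_odd": B-rational, but A fails
```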
If he just has an instinct that a 6 should come up again, but can’t explain where that instinct comes from or defend that belief in any kind of rational way other than “it feels right”, then he’s probably not being rational.
Maybe in the specific example of randomness, but I don’t think you can say the general case of ‘it feels so’ is indefensible.
This same mechanism is used for really complicated black box intuitive reasoning that underpins any trained skill. So in areas one has a lot of experience in, or areas which are evolutionarily keyed in, such as social interactions or in nature, this isn’t an absurd belief.
In fact, knowing that these black box intuitions exist means they have to be included in our information about the world, so ‘give high credence to the black box when it says something’ may be the best strategy if one’s ability for analytic reasoning is insufficient to determine strategies with results better than that.
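One crude way to cash out ‘give high credence to the black box’ (a toy sketch of my own; the track-record numbers are invented) is to defer to whichever predictor, gut or explicit analysis, has empirically earned more trust in the domain at hand:

```python
def better_predictor(track_record):
    """track_record[source] = (times right, times consulted).
    Defer to the source with the better empirical hit rate;
    with no data, treat a source as a coin flip."""
    def hit_rate(source):
        right, total = track_record[source]
        return right / total if total else 0.5
    return max(track_record, key=hit_rate)

# Invented numbers: a well-trained intuition in a familiar domain.
record = {"gut_feeling": (42, 50), "explicit_analysis": (9, 20)}
print(better_predictor(record))  # "gut_feeling": the black box has earned credence
```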
Maybe in the specific example of randomness, but I don’t think you can say the general case of ‘it feels so’ is indefensible. This same mechanism is used for really complicated black box intuitive reasoning that underpins any trained skill. So in areas one has a lot of experience in, or areas which are evolutionarily keyed in, such as social interactions or in nature, this isn’t an absurd belief.
Eh. Maybe, but I think that any idea which seriously underpins your actions and other belief systems in an important way should be something you can justify in a rational way. It doesn’t mean you always need to think about it in that way; some things become “second nature” over time, but you should be able to explain the rational underpinnings if asked.
If you’re talking about a trained skill, “I’ve been fixing cars for 20 years and in my experience when you do x you tend to get better results than when you do y” is a perfectly rational reason to have a belief. So is “That’s what it said in my medical school textbook”, etc.
But, in my experience, people who put too much faith in their “black boxes” and don’t ever think through the basis of their beliefs tend to behave in systematically irrational ways that probably harm them.
It’s funny, I think this is probably always true as a guideline (that you should try to justify all your ideas) but might always break down in practice (all your ideas probably can’t ever be fully justified, because of Agrippa’s trilemma: they’re either justified in terms of each other or not justified, and if they are justified in terms of other ideas, they eventually either become circularly justified, continue on into infinite regress, or are justified by things that are unjustified). We might get some ground by separating out ideas from evidence, and say we accept as axiomatic anything that is evidenced by inference until we gain additional facts that lend context that resituates our model so that it can include previous observations… something like that. Or it might be that we just have to grandfather in some rules to avoid that Gödelian stuff. Thoughts?
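The trilemma is easy to see in miniature if you model justification as a graph (a toy sketch; the beliefs and their structure are invented). In any finite network, every chain of “why?” has to end in a circle or an unjustified starting point; infinite regress is the one horn a finite graph can’t exhibit.

```python
def trace_justification(belief, justified_by, seen=None):
    """Follow the chain of reasons behind a belief (first reason only,
    to keep the sketch short) until it hits one of Agrippa's horns."""
    seen = seen or []
    if belief in seen:
        return "circular: " + " -> ".join(seen + [belief])
    reasons = justified_by.get(belief, [])
    if not reasons:
        return "unjustified foundation: " + belief
    return trace_justification(reasons[0], justified_by, seen + [belief])

# An invented toy belief network:
justified_by = {
    "induction works": ["induction has worked before"],
    "induction has worked before": ["induction works"],  # a circle
    "the die is fair": ["the casino is audited"],         # ends at a bare axiom
}
print(trace_justification("induction works", justified_by))
print(trace_justification("the die is fair", justified_by))
```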
Yes, that is a very good point. My current view is that the reason for this is a confusion between seeing knowledge as based on rationality when it is in reality based on experience. Rationality is the manipulation of basic experiential building blocks, and these ‘belief’ blocks might correspond to reality or not. With the scientific method this correspondence has been clarified to such an extent that it seems as if knowledge is generated purely through rationality, but that is because we don’t tend to follow our assumptions to the limits you are describing in your comment. If we check our assumptions, and then the assumptions behind our assumptions, etc., we will reach our fundamental presuppositions.
Yeah, that’s a good point; on some level, any purely logical system always has to start with certain axioms that you can’t prove within that system, and in the real world that’s probably even more true.
I guess, ideally, you would want to be able to at least identify which of your ideas are axioms, and keep an eye on them in some sense to make sure that at least they don’t end up conflicting with other axioms?