I disagree… I think “limited analysis resources” accounts for the very difference you speak of. I think the “rituals of cognition” you mention are themselves subject to rationality analysis: if I’m understanding you correctly, you are talking about someone who knows how to be rational in theory but cannot implement that theory in practice. I think you run into three possibilities there.
One, the person has insufficient analytical resources to translate their theory into action, which Robin accounts for. The person is still rational, given their budget constraint.
Two, the person could gain the ability to make the proper translation, but the costs of doing so are so high that the person is better off with the occasional translation error. The person rationally chooses not to learn better translation techniques.
Three, the person systematically makes mistakes in the translations. That, I think, we can fairly call a bias, which is what we’re trying to avoid here. The person is acting irrationally—if there is a predictable bias, it should have been corrected for.
On your last point: “[Robin would] give someone ‘rationality points’ for coming up with a better algorithm that requires less clock cycles, while I would just give them ‘cleverness points’.”
I think I have to side with Robin here. On certain issues it might not matter how quickly or efficiently the rational result is arrived at, but in almost all situations coming up with a faster way to reach a rational result is more rational, since individuals face constraints of time and resources. While the faster algorithm isn’t more rational on a single, isolated issue [assuming both algorithms lead to the same rational result], the person can move on to a different issue sooner and thus has more resources available to be rational in other settings.