There is quite a gap between wanting to be rational and wanting to know how unbiased you are. Since the test is self-administered, pursuing the first desire could easily lead to a favourable, biased, yet seemingly rational test result. That result would be shaped by personal expectations, and its reliability is nil according to Löb's Theorem. The second desire implies being open about one's biased state and declares the purpose of assessing some kind of bias/rationality balance. This endeavour is more profitable than the first because, hopefully, it yields actionable information.
Perhaps one could get a good shot at uncovering one's biases by making quick judgements and later revisiting the various aspects and steps of each judgement, taking seemingly absurd alternatives into account and paying attention to the smallest details. The result could be expressed as a percentage of correct versus faulty conclusions, as sketched below. Apart from revealing some sort of rational/biased ratio in a line of thought, this process should naturally bring one closer to rationality through memorizing one's judgement flaws, their sources and patterns, and through developing a habit of sound thinking from a rationality point of view.
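For illustration only, here is a minimal Python sketch of how such a self-audit log and its correct/faulty percentage might be kept; the record structure, field names, and the naive scoring rule (quick conclusion matches the reviewed one) are my own assumptions, not a prescribed method.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Judgement:
    """A quick judgement, later re-examined slowly and in detail (hypothetical format)."""
    question: str
    quick_conclusion: str
    reviewed_conclusion: str = ""                          # filled in after the slow review
    flaws_found: List[str] = field(default_factory=list)   # e.g. "anchoring", "ignored alternative"

def bias_ratio(log: List[Judgement]) -> float:
    """Fraction of quick conclusions that survived the detailed review unchanged."""
    reviewed = [j for j in log if j.reviewed_conclusion]
    if not reviewed:
        return float("nan")
    correct = sum(1 for j in reviewed if j.quick_conclusion == j.reviewed_conclusion)
    return correct / len(reviewed)

# Example usage with made-up entries:
log = [
    Judgement("Is the cheaper contractor the better deal?", "yes", "no", ["ignored maintenance costs"]),
    Judgement("Will the release slip?", "no", "no"),
]
print(f"quick judgements confirmed on review: {bias_ratio(log):.0%}")
```

A running log of the `flaws_found` entries would also serve the memorization of flaw patterns mentioned above.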
This test could yield a much more reliable result when administered to someone else: provide all the information necessary to reach the right conclusion, together with vague, inconclusive information pointing toward incorrect conclusions, and strong incentives for reaching some of the wrong ones.
Speaking of incentives, I believe anyone trying to be as rational as possible within a group will be influenced by group values and beliefs. Trying to spot biases in the group's (or its members') judgements is therefore likely to correlate with one's affinity for that group. Rationality should be neutral, but neutrality is seldom a group value, so chances are high that instinctive rationalists will be outliers. The tendency to agree with a group's beliefs is probably as distorting as the tendency to find biases in them, both depending on one's degree of sympathy for that group.
Identifying biases in others is therefore an unreliable measure of one's own rationality, both because of the incentives involved in interacting with others and because there is usually little information about the thought processes that led them to specific outcomes. Moreover, beliefs widely spread across a social system can have consequences that seemingly confirm those beliefs without their being rational; in that case, comparing one's judgement against the facts would indicate power rather than rationality.