I think the question about which cases to focus on when forming theories is different from the question of which cases to use to train oneself to verbalize one’s thoughts without interfering with one’s thinking. The latter requires us to train on paradigms, the former may be something we can pursue in either direction.
This is crucial: The thought isn’t to presuppose which direction our theorizing should go, but rather to make sure that when we theorize, we aren’t tripping ourselves up.
Mmm, very good point. Strangely, now that I think about it, this sounds very similar to the concept of the highest principle:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
In the comparison between Rationality Recognition and Face Recognition, what is the Rationality Recognition equivalent of sight?
It depends. Sometimes it will be sight or our other senses, sometimes it will be memory, sometimes it will be testimony.
Think about it this way: we take in information all the time and draw conclusions from it. “Sight” isn’t playing a key role in face recognition beyond providing the data; you have a mental program for matching visual face data to previous visual face data, and that program gets screwed up if you start thinking through a description of the face after you see it.
Similarly, you see a room full of objects and events. You’ve got one or more “draw conclusions” programs that run on the data you see, and those programs can get screwed up by putting into words things you don’t normally verbalize.
The data on insight puzzles shows that even if you do manage to draw the right conclusion, and you then try to put into words how you did it, you may get screwed up in the following way: you become confident in explanation A for how you drew the conclusion, when in actuality the truth is the radically different explanation B.
My claim isn’t about rationality recognition per se, it is simply this: psychology has shown that verbalizing can screw us up when dealing with a process that isn’t normally done verbally. And a lot (if not most) of our inferential processes are not done in this explicitly verbalized manner (verbalized doesn’t necessarily mean spoken aloud, but just ‘thinking through in words’).
My claim is that there are known ways to get good at verbalizing non-verbal processes, and they involve training on paradigmatic cases. It is only after such training that one can start thinking about edge cases and the borderlands without worrying that the process of discussing the cases is corrupting one’s thinking about them.
Before we can advance rationality by discussion, we must first learn to discuss rationality.
Understood. Thanks for the clarification. Going back and rereading the article after these comments made a few more lights click on in my head.
So, where do we start?
I guess we find out how to acquire verbal expertise in a given domain, and do so for rationality, reasoning, and inference.