I agree that a picture of many weak arguments supporting or undermining explicit claims does not capture what humans do: the inferences themselves are much more complex than logical deductions, so much so that we don’t yet know any way of representing the actual objects being manipulated. I think this is the mainstream view, certainly in AI now.
I don’t know what it means to say that our pattern-recognition capabilities are stronger than our logical reasoning; they are two different kinds of cognitive task. It seems like saying that we are much better at running fast than at lifting heavy objects. Sometimes you can do a task in either way, and we might say that one or the other is a better way to get something done. And we can compare to other animals, or to machines, and talk about comparative advantage. And so on.
Perhaps the most relevant version of the claim would ask what fraction of the variance in outcomes is explained by one characteristic or another, or in which domain practice is most helpful. I think that explicit principles about how to reason do not distinguish good mathematicians from each other, though they may distinguish mathematicians at different times. The situation seems similar in most endeavors. I think this is because it is so much easier to transfer explicit information between people, so the residual is what differentiates. Learning the explicit info is still the right first step toward mastery, though it’s not most of the work.
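One way to make that question precise (my gloss, not something from your post): model an outcome Y as a component predictable from explicit knowledge plus a residual, and ask how big the first piece is:

$$Y = f(X_{\text{explicit}}) + \varepsilon, \qquad R^2 = \frac{\operatorname{Var}(f(X_{\text{explicit}}))}{\operatorname{Var}(Y)}.$$

The claim above is then that, among today’s mathematicians, this $R^2$ is small and the residual $\varepsilon$ carries the differences, even though learning the explicit part comes first.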
Improving explicit info and norms seems to be the main way that we progress as a civilization, because that’s what you can build on, share, and so on. But of course, those can be explicit norms about how to train any part of reasoning, and I am pretty agnostic about what kind of reasoning is best to try to improve.
Overall I feel like there is a broad version of your thesis that is clearly true. You are no doubt making a more specific claim, and I’m interested to see it fleshed out. I can easily see disagreeing with it; if I do, it’s probably because I have a broader sense of (or distribution over) the ways we might use our brains. For example:
“Very high level epistemic rationality is about retraining one’s brain to be able to see patterns in the evidence in the same way that we can see patterns when we observe the world with our eyes.”
is a specific way to use our built-in pattern recognition; one could list perhaps half a dozen similar tactics at a similar level of abstraction. I will be interested to see whether you have evidence that distinguishes between these tactics and singles out one as most important. (You could also interpret your quote in a broad way, in which case I think it is clearly true. So I guess I should just wait to discuss!)
Note that the accuracy of this kind of literal analogy, and how far it can be taken, is a question that researchers in computer science and AI explicitly discuss. Almost everyone agrees that there is some of this, but I think there is legitimate uncertainty about how much.
Also, when you say that mathematicians “don’t make their discoveries through reasoning,” it’s not exactly clear what this means. Depending on the interpretation, it might be something I disagree with strongly, something I think is clearly true, or somewhere in between. Logic plays a key role in the mathematician’s reasoning. None of the steps the mathematician makes are valid inferences in any formal proof system (but neither are the steps of most published proofs!), yet she will often write or think pairs of statements that are in fact related to each other in a logically precise way, and the existence of such relationships is a key part of why the cognitive process yields correct results.
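For contrast, here is what a single step that is a valid inference in a formal proof system looks like (a minimal Lean sketch of my own, not anything from your post):

-- A fully formal inference step: the kernel checks that Eq.symm
-- applied to the hypothesis h really does prove b = a, with no gaps.
example (a b : Nat) (h : a = b) : b = a := h.symm

Virtually none of a working mathematician’s individual steps would check like this, which is the sense of the parenthetical above; the informal steps nonetheless track relationships that could in principle be formalized.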