No, it’s better to assume that folk-psychology doesn’t accurately map the mind. “Reversed stupidity is not intelligence.”
Your statement is equivalent to saying, “We’ve seen a beautiful sunset. Clearly, it must be a sign of God’s happiness, since it couldn’t be a sign of God’s anger.” In actual fact, it’s all a matter of the atmosphere refracting light from a giant nuclear-fusion reaction, and made-up deities have nothing to do with it.
Just because a map seems to let you classify things doesn’t mean it provides accurate causal explanations.
If we don’t know enough about how the mind works to say it is good at compartmentalisation, we also don’t know enough to say it is bad at compartmentalisation.
Your position requires you to be noncommittal about a lot of things. Maybe you are managing that.
The sunset analogy doesn’t hold, because in that case we have the science as an alternative.
I wouldn’t be able to tell if someone is a good mathematician, but I’d know that if they add 2 and 2 the normal way and get 5, they’re a bad one. It’s often a lot easier to detect incompetence, or at least some kinds of incompetence, than excellence.
Is compartmentalisation supposed to be a competence or an incompetence, or neither?
Personally, I don’t think “compartmentalization” actually cuts reality at the joints. Surely the brain must solve a classification problem at some point, but it could easily “fall out” that your algorithms simply perform better if they classify things or situations between contextualized models—that is, if they “compartmentalize”—than if they try to build one humongous super-model for all possible things and situations.
But you don’t have proof of that theory, do you?
Your original thesis would support that theory, actually.
I haven’t made any object-level claims about psychology.