But normal human beings are extremely good at compartmentalization. In other words, they are extremely good at knowing when knowing the truth is going to be useful for their goals and when it is not. This means that they are better than Less Wrongers at attaining their goals, because the truth does not get in the way.
If you really believe this, I’d love to see a post on a computational theory of compartmentalization, so you can explain for us all how the brain performs this magical trick.
I’m not sure what you mean by “magical trick.” For example, it’s pretty easy to know that it doesn’t matter (for the brain’s purposes) whether or not my politics is objectively correct; for those purposes it mainly matters whether I agree with my associates.
it’s pretty easy to know that it doesn’t matter (for the brain’s purposes) whether or not my politics is objectively correct

That’s the part I consider controversial. If you haven’t characterized what sort of inference problem the brain is actually solving, then you don’t know the purposes behind its functionality. You only know what things feel like from the inside, and that’s unreliable.
Hell, if normative theories of rationality were more computational and less focused on sounding intellectual, I’d believe in those a lot more thoroughly, too.
If you have some sort of distributed database with multiple updates from multiple sources, it’s likely to get into an inconsistent state unless you take measures to prevent that. So the way to achieve the magic of compartmentalised “beliefs” is to build a system like that, but don’t bother to add a consistency layer.
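To make the database analogy concrete, here’s a minimal Python sketch (all names invented for illustration; this is not a claim about how brains are actually implemented). Each context accepts writes into its own compartment, and no process ever reconciles compartments:

```python
# Hypothetical sketch: a multi-writer store with no consistency layer.
# The same key can hold contradictory values across contexts forever.

class CompartmentalizedStore:
    def __init__(self):
        self._compartments = {}  # context -> {key: value}

    def write(self, context, key, value):
        # Each context updates its own compartment; no cross-context check.
        self._compartments.setdefault(context, {})[key] = value

    def read(self, context, key):
        # Reads are answered only from the currently active compartment.
        return self._compartments.get(context, {}).get(key)

store = CompartmentalizedStore()
store.write("with_associates", "my_politics_is_correct", True)
store.write("private_reasoning", "my_politics_is_correct", False)

# Both "beliefs" coexist; the contradiction only becomes visible if some
# process deliberately compares compartments -- and that comparison is
# exactly the consistency layer that was never built.
assert store.read("with_associates", "my_politics_is_correct") is True
assert store.read("private_reasoning", "my_politics_is_correct") is False
```

The point of the sketch is that inconsistency is the default for such a system; keeping “beliefs” consistent is the part that costs extra machinery.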
Perhaps he will, if you agree to also post your computational theory of how the brain works.
If you don’t have one, then it’s unreasonable to demand one.
That was several months ago.
Nice! I’ll bookmark that.
You think there is no evidence that it does?
You only really understand something when you understand how it’s implemented.
Whatever. The statement “But normal human beings are extremely good at compartmentalization” has little to do with understanding or implementation, so you would seem to be changing the subject.
Well no. I’m saying that folk-psychology has been extremely wrong before, so we shouldn’t trust it. You invoke folk-psychology to say that the mind uses compartmentalization to lie to itself in useful ways. I say that this folk-psychological judgement lacks explanatory power (though it certainly possesses status-attribution power: low status to those measly humans over there!) in the absence of a larger, well-supported theory behind it.
Is it better to assume non-compartmentalisation?
No, it’s better to assume that folk-psychology doesn’t accurately map the mind. “Reversed stupidity is not intelligence.”
Your statement is equivalent to saying, “We’ve seen a beautiful sunset. Clearly, it must be a sign of God’s happiness, since it couldn’t be a sign of God’s anger.” In actual fact, it’s all a matter of the atmosphere scattering light from a giant nuclear-fusion reaction, and made-up deities have nothing to do with it.
Just because a map seems to let you classify things, doesn’t mean it provides accurate causal explanations.
If we don’t know enough about how the mind works to say it is good at compartmentalisation, we also don’t know enough to say it is bad at compartmentalisation.
Your position requires you to be noncommittal about a lot of things. Maybe you are managing that.
The sunset case isn’t analogous, because there we have the science as an alternative.
I wouldn’t be able to tell if someone is a good mathematician, but I’d know that if they add 2 and 2 the normal way and get 5, they’re a bad one. It’s often a lot easier to detect incompetence, or at least some kinds of incompetence, than excellence.
Is compartmentalisation supposed to be a competence or an incompetence, or neither?
Personally, I don’t think “compartmentalization” actually cuts reality at the joints. Surely the brain must solve a classification problem at some point, but it could easily “fall out” that your algorithms simply perform better if they sort things or situations into contextualized models—that is, if they “compartmentalize”—than if they try to build one humongous super-model for all possible things and situations.
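Here’s a toy sketch of that “fall out” point, with entirely invented data and no pretense of modeling actual cognition: when the right answer depends on context, tiny per-context majority models beat a single pooled model, so “compartments” can emerge from performance pressure alone.

```python
# Toy illustration (hypothetical data): per-context models vs. one
# pooled "super-model" on context-dependent observations.
from collections import Counter

# Invented observations: which label "works" depends on the context.
data = [("home", "A")] * 8 + [("home", "B")] * 2 \
     + [("work", "B")] * 7 + [("work", "A")] * 3

# One humongous super-model: a single majority label for everything.
global_label = Counter(label for _, label in data).most_common(1)[0][0]

# Contextualized models: a separate majority label per context.
per_context = {
    ctx: Counter(l for c, l in data if c == ctx).most_common(1)[0][0]
    for ctx in {c for c, _ in data}
}

global_acc = sum(label == global_label for _, label in data) / len(data)
context_acc = sum(label == per_context[ctx] for ctx, label in data) / len(data)

print(f"super-model accuracy:  {global_acc:.0%}")   # 55%
print(f"per-context accuracy:  {context_acc:.0%}")  # 75%
```

Note that nothing in the per-context version ever checks whether its models agree with each other; that unchecked disagreement is the behavior “compartmentalization” names.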
But you don’t have proof of that theory, do you?
Your original thesis would support that theory, actually.
I haven’t made any object-level claims about psychology.