Your world-model-vs.-executive framing is interesting, but I suspect the two-system model described in Thinking, Fast and Slow carves reality at the joints better. From reading Kahneman, I get the impression that it’s not so much our conscious desires to modify our world model that trip us up. If anything, the opposite appears to be true: conscious, deliberative thinking, when applied well, almost always leaves you with a better model of the world than your subconscious System 1 processes, which tend to quickly generate an explanation of the evidence that seems coherent and pleasing and then stop (why keep thinking if you’ve already figured it out?).
Maybe your framing is a useful way to think about motivated cognition, which Kahneman doesn’t discuss as much. I think I stopped doing as much motivated cognition when I installed a process in my brain that watches my thought patterns for things that look like stereotypical examples of motivated cognition and makes me anxious until I dig a little deeper. (For example, writing the previous sentence caused the process to fire, because it was an instance of me claiming that I’m less biased than I used to be, and “I’m not biased the way everyone else is” matches my motivated-cognition pattern recognizer.) I have a personal hunch that motivated cognition is a basic function call that your brain uses all over the place: “what company should I start?” or “what sounds appealing for dinner?” initiates the same basic sort of thought process as “why is this blog post whose conclusion I disagree with wrong?” If my hunch is correct, the best way to use motivated cognition in many cases may be to run it in both directions, e.g. spend a timed minute thinking of reasons the blog post is wrong, then a timed minute thinking of reasons it’s right. I suspect you’ll generate more ideas this way than if you spend a timed two minutes in directionless mulling.
If anyone wants to install this observer process in themselves, probably the most general way to do it is to flag instances where you find yourself searching for evidence to support a particular conclusion.
Another story about how confirmation bias arises is through the global approach/avoid assessments generated by System 1. Under this view, it’s not so much motivated cognition that gets you: even if you set out with the intention of seeing the world as it really is, you’ll find yourself naturally shying away from explanations that make you uncomfortable and devoting less thought to them, while devoting more thought to pleasant explanations. On a community site like Less Wrong, say, users could find themselves mysteriously procrastinating more on writing a post that disputes a popularly held view than on a post no one is likely to disagree with, even when their conscious, deliberative judgments about the value of writing each post are identical. On this view, stoic resolve to think thoughts even when they seem uncomfortable would be key. If this prediction is correct, giving people the stoic resolve to overcome aversions would help them in many areas of their lives, and in particular help them fight confirmation bias.
I guess another strategy, rather than learning to overcome aversions, would be to reframe potentially uncomfortable truths by reminding yourself that you want to know what’s true?
By the way, I’m not sure why Thinking, Fast and Slow isn’t discussed more here on Less Wrong. It’s a fabulous book, and compares quite favorably with the Sequences in my view. I found the content overlap surprisingly small, though; I suspect it’s worth many people’s time to read both.