I feel I should jump in here, as you appear to be talking past each other. There is no confusion over the system 1/system 2 distinction; you're both using the same definition. The bit about decoys and shields, however, was the core of PJ's post, and of the difference between your positions. PJ holds that to change someone's mind you must focus on their S1 response, because if they engage S2, it will just rationalize and confabulate to defend whatever position their S1 already holds. Now, I have no idea how one would go about altering the S1 response of someone who didn't want it altered, but I do know that many people respond very badly to rational arguments that go against their intuition, becoming as irrational as necessary to avoid admitting their mistake.
I don’t believe we are, because I know of no evidence of the following:
evolutionarily speaking, a big function of system 2 is to function as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person’s skill at system 2 reasoning just increases their resistance to ideas.
Originally, I was making the case that attempting to reason with them was the wrong strategy. Given your interpretation, it looks like pjeby didn't realize I was suggesting that, and then suggested essentially the same thing.
My experience, across various believers (Christian, Jehovah's Witness, New Age woo-de-doo), is that system 2 is never engaged on the defensive, and the sort of rationalization we're talking about never uses it. Instead, believers construct and recount rationalizations in the form of narratives. I claim this largely because I observed how "disruptable" they were during these explanations: not very.
How to approach changing belief: avoid resistance by sidestepping the issue itself and finding something at the periphery of the belief. Assist in developing rational thinking where the person has no resistance, and empower them. Strategically, getting them to admit their mistake is not the goal; it's not even in the same ballpark. The goal is rational empowerment.
Part of the problem, which I know has been mentioned here before, is unfamiliarity with fallacies and what they imply. When we recognize fallacies, most of the time it's intuitive: we recognize a pattern likely to be a fallacy, and respond. We've added that skill to our toolbox, but it's still intuitive, like a chess master who can walk past a board and say "white mates in three."
Now, I have no idea how one would go about altering the S1 response of someone who didn’t want their response altered,
Tell them stories. If you’ll notice, that’s what Eliezer does. Even his posts that don’t use fiction per se use engaging examples with sensory detail. That’s the stuff S1 runs on.
Eliezer uses a bit more S2 logic in his stories than is perhaps ideal for a general audience; it’s about right for a sympathetic audience with some S2+ skills, though.
On a general audience, what might be called “trance logic” or “dramatic logic” works just fine on its own. The key is that even if your argument can be supported by S2 logic, to really convince someone you must get a translation to S1 logic.
A person who’s being “reasonable” may or may not do the S2->S1 translation for you. A person who’s being “unreasonable” will not do it for you; you have to embed S1 logic in the story so that any effort to escape it with S2 will be unconvincing by comparison.
This, by the way, is how people who promote things like intelligent design work: they set up analogies and metaphors that are much more concretely convincing on the S1 level, so that the only way to refute them is to use a massive burst of S2 reasoning that leaves the audience utterly unconvinced, because the “proof” is sitting right there in S1 without any effort being required to accept it.
Perhaps one or both of us misunderstands the system 1/system 2 model. Here is a better description of the two.
This. Exactly this. YES.