pjeby, sorry I wasn’t clear; I should have given some context. I’m referring to system 1 and system 2 as the simplified categories of thinking used in cognitive science, particularly in behavioral economics. Here’s Daniel Kahneman discussing them. I’m not sure what you’re referring to with decoys and shields, which I’ll just leave at that.
To add to my quoted statement: workarounds are incredibly hard, and focusing on reasoning (system 2) about an issue or belief leaves few cycles for receiving and sending social cues and signals. We can still pick up those cues and signals while reasoning, but they’ll break our concentration, so we tend to ignore them when reasoning carefully. The automatic, intuitive processing of the face interferes with the reasoning task; e.g., we usually look somewhere else when reasoning during a conversation. To execute a workaround strategy, however, we need to be attuned to the other person.
When I refer to belief, I’m not referring to fear of the dark or serial killers, or phobias. Those tend to be conditioned responses—the person knows the belief is irrational—and they can be treated easily enough with systematic desensitization and a little CBT thrown in for good measure. Calling them beliefs isn’t wrong, but since the person usually knows they’re irrational, they’re outside my intended scope of discussion: beliefs that are perceived by the believer to be rational.
People are automatically resistant to being asked to question their beliefs. Usually it’s perceived as unfair, if not an actual attack on them as a person: those beliefs are associated with their identity, which they won’t abandon outright. We shouldn’t expect them to. It’s unrealistic.
What should we do, then? Play at the periphery of belief. To reformulate the interaction as a parable: We’ll always lose if we act like the wind, trying to blow the cloak off the traveller. If we act like the sun, the traveller might remove his cloak on his own. I’ll think about putting a post together on this.
I’m not sure what you’re referring to with decoys and shields, which I’ll just leave at that.
My hypothesis is that reasoning as we know it evolved as a mechanism to both persuade others, and to defend against being persuaded by others.
Consider priming, which works as long as you’re not aware of it and therefore not defending against it. But it makes no sense to evolve a mechanism to avoid being primed unless the priming mechanism were being exploited by our tribe-mates. (After all, they’re the only ones besides us with the language skill to trigger it.)
In other words, once we evolved language, we became more gullible, because we were now verbally suggestible. This would then have resulted in an arms race of intelligence to both persuade, and defend against persuasion, with tribal status and resources as the prize.
And once we evolved to the point of being able to defend ourselves against any belief-change we’re determined to avoid, the prize would’ve become being able to convince neutral bystanders who didn’t already have something at stake.
The system 1/2 distinctions cataloged by Stanovich & West don’t quite match my own observation, in that I consider any abstract processing to be system 2, whether it’s good reasoning or fallacious, and whether it’s cached or a work-in-progress. (Cached S2 reasoning isn’t demanding of brainpower, and in fact can be easily parroted back in many forms once an appropriate argument has been heard, without the user ever needing to figure it out for themselves.)
In my view, the primary functional purpose of human reasoning is to persuade or prevent persuasion, with other uses being an extra bonus. So in this view, using system 2 for truly rational thought is actually an abuse of the system… which would explain why it’s so demanding of cognitive capacity, compared to using it as a generator of confabulation and rhetoric. And it also explains why it requires so much learning to use properly: it’s not what the hardware was put there for.
The S&W model is IMO a bit biased by the desire to find “normative” reasoning (i.e., correct reasoning) in the brain, even though there’s really no evolutionary reason for us to have truly rational thought or to be particularly open-minded. In fact, there’s every evolutionary reason for us to not be persuadable whenever we have something at stake, and to not reason things out in a truly fair or logical manner.
Hence, some of the attributes they give system 2 are (in my view) attributes of learned reasoning running on top of system 2 in real time, rather than native attributes of system 2 itself, or reflective of cached system 2 thinking.
Anyway, IAWYC re: the rest, I just wanted to clarify this particular bit.
Actually, system one can handle a surprising amount of abstraction; I don’t have a reference handy, but any comprehensive description of conceptual synesthesia should do a good job of explaining it. (I’m conceptually synesthetic enough that I don’t need it explained, and have never actually needed an especially good reference before.)
The fact that I can literally see that the concept ‘deserve X’ depends on the emotional version of the concept ‘should do X’, because the pattern for one contains the pattern for the other, makes it very clear to me that such abstractions are not dependent on the rational processing system.
It’s also noteworthy that synesthesia appears to be a normal developmental phase; it seems pretty likely to me that I’m merely more aware of how my brain is processing things, rather than having a radically different mode of processing altogether.
Actually, system one can handle a surprising amount of abstraction; I don’t have a reference handy, but any comprehensive description of conceptual synesthesia should do a good job of explaining it.
I’d certainly be interested in that. My own definitions are aimed at teaching people not to abstract away from experience, including emotional experience. Certainly there is some abstraction at that level; it’s just a different kind of abstraction (ISTM) than system 2 abstraction.
In particular, what I’m calling system 1 does not generally use complex sentence structure or long utterances, and the referents of its “sentences” are almost always concrete nouns, with its principal abstractions being emotional labels rather than conceptual ones.
The fact that I can literally see that the concept ‘deserve X’ depends on the emotional version of the concept ‘should do X’, because the pattern for one contains the pattern for the other, makes it very clear to me that such abstractions are not dependent on the rational processing system.
I consider “should X” and “deserve X” to both be emotional labels, since they code for attitude and action towards X, and so both are well within system 1 scope. When used by system 2, they may carry totally different connotations, and have nothing to do with what the speaker actually believes they deserve or should do, and especially little to do with what they’ll actually do.
For example, a statement like, “People should respect the rights of others and let them have what they deserve” is absolutely System 2, whereas, a statement like “I don’t deserve it” (especially if experienced emotionally) is well within System 1 territory.
It’s entirely possible that my definition of system 1/2 is more than a little out of whack with yours or the original S&W definition, but under my definition it’s pretty easy to learn to distinguish S1 utterances from S2 utterances, at least within the context of mind hacking, where I or someone else is trying to find out what’s really going on in System 1 in relation to a topic, and distinguish it from System 2’s confabulated theories.
However, since you claim to be able to observe system 1 directly, this would seem to put you in a privileged position with respect to changing yourself—in principle you should be able to observe what beliefs create any undesired behaviors or emotional responses. Since that’s the hard part of mind hacking IME, I’m a bit surprised you haven’t done more with the “easy” part (i.e. changing the contents of System 1).
In particular, what I’m calling system 1 does not generally use complex sentence structure or long utterances, and the referents of its “sentences” are almost always concrete nouns, with its principal abstractions being emotional labels rather than conceptual ones.
Yep, it mostly uses nouns, simple verbs, relatedness categorizations (‘because’), behavior categorizations (‘should’, ‘avoid with this degree of priority’), and a few semi-abstract concepts like ‘this week’. Surprisingly, I don’t often ‘see’ the concepts of good or bad—they seem to be more built-in to certain nouns and verbs, and changing my opinion of a thing causes it to ‘look’ completely different. (That’s also not the only thing that can cause a concept to change appearance—one of my closest friends has mellowed from a very nervous shade of orange to a wonderfully centered and calm medium-dark chocolate color over the course of the last year or so.)
I consider “should X” and “deserve X” to both be emotional labels, since they code for attitude and action towards X, and so both are well within system 1 scope. When used by system 2, they may carry totally different connotations, and have nothing to do with what the speaker actually believes they deserve or should do, and especially little to do with what they’ll actually do.
For example, a statement like, “People should respect the rights of others and let them have what they deserve” is absolutely System 2, whereas, a statement like “I don’t deserve it” (especially if experienced emotionally) is well within System 1 territory.
Hmm… heh, it actually sounds like I just don’t use system 2, then.
However, since you claim to be able to observe system 1 directly, this would seem to put you in a privileged position with respect to changing yourself—in principle you should be able to observe what beliefs create any undesired behaviors or emotional responses. Since that’s the hard part of mind hacking IME, I’m a bit surprised you haven’t done more with the “easy” part (i.e. changing the contents of System 1).
I have and do, actually, and there’s very little that’s ‘undesirable’ left in there that I’m aware of. (The only two things that come to mind that I’d change are an irrational but so far not problematic fear of teenagers, and a rationally-based but problematic fear of mental health professionals and, by extension, doctors. I’ve already done significant work on the second, or I wouldn’t be able to calmly have this conversation with you.) The major limitation is that I can only see what’s at hand, and it takes a degree of concentration to do so. I can’t detangle my thought process directly while I’m trying to carry on a conversation, unless it’s directly related to exactly what I’m doing at the moment, and I can’t fix problems that I haven’t noticed or have forgotten about.
I’m going to be putting together a simple display on conceptual synesthesia for my Neuroversity project this week… I’ll be sure to send you a link when it’s done.
I’ve been thinking more about this… or, not really. One of the downsides to my particular mind-setup is that it takes a long time to retrieve things from long-term memory, but I did retrieve something interesting just now.
When I was younger, I think I did use system two moderately regularly. I do vaguely remember intentionally trying to ‘figure things out’ using non-synesthetic reasoning—before I realized that the synesthesia was both real and useful—and coming to conclusions. I very distinctly remember having a mindset more than once of “I made this decision, so this is what I’m going to do, whether it makes sense now or not”. I also remember that I was unable to retain the logic behind those decisions, which made me very inflexible about them—I couldn’t use new data to update my decision, because I didn’t know how I’d come to the conclusion or how the new data should fit in. Using that system is demanding enough that it simply wasn’t possible to re-do my logic every single time a potentially-relevant piece of data turned up, and in fact I couldn’t remember enough of my reasoning to even figure out which pieces of data were likely to be relevant. The resulting single-mindedness is much less useful than the ability to actually be flexible about your actions, and after having that forcibly pointed out by reality a few times, I stopped using that method altogether.
There does seem to be a degree of epistemic hygiene necessary to switch entirely to using system one, though. I do remember, vaguely, that one problem I had when I first started using system one for actual problems was that I was fairly easy to persuade—it took a while to really get comfortable with the idea that someone could have an opinion that was well-formed and made sense but still not be something that I would ‘have to’ support or even take into consideration, for example. Essentially my own concepts of what I wanted were not strong enough to handle being challenged directly, at first. (I got better.)
I feel I should jump in here, as you appear to be talking past each other. There is no confusion in the system 1/system 2 distinction; you’re both using the same definition, but the bit about decoys and shields was actually the core of PJ’s post, and of the difference between your positions. PJ holds that to change someone’s mind you must focus on their S1 response, because if they engage S2, it will just rationalize and confabulate to defend whatever position their S1 holds. Now, I have no idea how one would go about altering the S1 response of someone who didn’t want their response altered, but I do know that many people respond very badly to rational arguments that go against their intuition, increasing their own irrationality as much as necessary to avoid admitting their mistake.
I don’t believe we are, because I know of no evidence of the following:
evolutionarily speaking, a big function of system 2 is to function as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person’s skill at system 2 reasoning just increases their resistance to ideas.
Perhaps one or both of us misunderstands the model. Here is a better description of the two.
Originally, I was making a case that attempting to reason was the wrong strategy. Given your interpretation, it looks like pjeby didn’t understand I was suggesting that, and then suggested essentially the same thing.
My experience, across various believers (Christian, Jehovah’s Witness, New Age woo-de-doo), is that system 2 is never engaged on the defensive, and the sort of rationalization we’re talking about never uses it. Instead, they construct and explain rationalizations that are narratives. I claim this largely because I observed how “disruptable” they were during explanations—not very.
How to approach changing belief: avoid resistance by avoiding the issue and finding something at the periphery of belief. Assist in developing rational thinking where the person has no resistance, and empower them. Strategically, them admitting their mistake is not the goal. It’s not even in the same ballpark. The goal is rational empowerment.
Part of the problem, which I know has been mentioned here before, is unfamiliarity with fallacies and what they imply. When we recognize fallacies, most of the time it’s intuitive. We recognize a pattern likely to be a fallacy, and respond. We’ve built up that skill in our toolbox, but it’s still intuitive, like a chess master who can walk by a board and say “white mates in three.”
This. Exactly this. YES.
Now, I have no idea how one would go about altering the S1 response of someone who didn’t want their response altered,
Tell them stories. If you’ll notice, that’s what Eliezer does. Even his posts that don’t use fiction per se use engaging examples with sensory detail. That’s the stuff S1 runs on.
Eliezer uses a bit more S2 logic in his stories than is perhaps ideal for a general audience; it’s about right for a sympathetic audience with some S2+ skills, though.
On a general audience, what might be called “trance logic” or “dramatic logic” works just fine on its own. The key is that even if your argument can be supported by S2 logic, to really convince someone you must get a translation to S1 logic.
A person who’s being “reasonable” may or may not do the S2->S1 translation for you. A person who’s being “unreasonable” will not do it for you; you have to embed S1 logic in the story so that any effort to escape it with S2 will be unconvincing by comparison.
This, by the way, is how people who promote things like intelligent design work: they set up analogies and metaphors that are much more concretely convincing on the S1 level, so that the only way to refute them is to use a massive burst of S2 reasoning that leaves the audience utterly unconvinced, because the “proof” is sitting right there in S1 without any effort being required to accept it.