I’ve seen mentions of McGilchrist’s work pop up every now and then, but I’m still unclear on what exactly his model adds. In particular, I’m a little cautious of the illusion of understanding that may pop up whenever people just add neuroscience terms to an explanation which wouldn’t really need them.
E.g. “seeing a dog makes you feel afraid” versus “seeing a dog causes your amygdala to sound an alarm, making you feel afraid”. The second sentence basically just names a particular part of the brain involved in the fear response. This isn’t really a piece of knowledge that most people would do anything with (and it should hopefully have been obvious, without saying so, that some part of the brain is involved in the response), but it can still feel like it conveys substantially more information.
A lot of the summaries of McGilchrist that I’ve seen so far raise similar alarms for me. You suggest that if everyone had his model as a common framework, then talking about various things would become more productive. But as far as I can tell, your description of his work just associates various mental processes with different parts of the brain. What’s the benefit of saying “the verbal and explicit mode of thought, which is associated with the left hemisphere”, as opposed to just “the verbal and explicit mode of thought”?
Here’s one example of a benefit: the left hemisphere is known to have major blind spots that aren’t implied simply by saying “the verbal and explicit mode of thought.” Quoting McGilchrist (not sure about the page number; I’m looking at Location 5400 in the 2nd edition on Kindle) describing some tests done by temporarily deactivating one hemisphere and then the other in healthy individuals:
Take the following example of a syllogism with a false premise:
Major premise: all monkeys climb trees;
Minor premise: the porcupine is a monkey;
Implied conclusion: the porcupine climbs trees.
Well — does it? As Deglin and Kinsbourne demonstrated, each hemisphere has its own way of approaching this question. At the outset of their experiment, when the intact individual is asked “Does the porcupine climb trees?”, she replies (using, of course, both hemispheres): “It does not climb, the porcupine runs on the ground; it’s prickly, it’s not a monkey.” [...] During experimental temporary hemisphere inactivations, the left hemisphere of the very same individual (with the right hemisphere inactivated) replies that the conclusion is true: “the porcupine climbs trees since it is a monkey.” When the experimenter asks, “But is the porcupine a monkey?”, she replies that she knows it is not. When the syllogism is presented again, however, she is a little nonplussed, but replies in the affirmative, since “That’s what is written on the card.” When the right hemisphere of the same individual (with the left hemisphere inactivated) is asked if the syllogism is true, she replies: “How can it climb trees — it’s not a monkey, it’s wrong here!” If the experimenter points out that the conclusion must follow from the premises stated, she replies indignantly: “But the porcupine is not a monkey!”
In repeated situations, in subject after subject, when syllogisms with false premises, such as “All trees sink in water; balsa is a tree; balsa wood sinks in water,” or “Northern lights are often seen in Africa; Uganda is in Africa; Northern lights are seen in Uganda”, are presented, the same pattern emerges. When asked if the conclusion is true, the intact individual displays a common sense reaction: “I agree it seems to suggest so, but I know in fact it’s wrong.” The right hemisphere dismisses the false premises and deductions as absurd. But the left hemisphere sticks to the false conclusion, replying calmly to the effect that “that’s what it says here.”
In the left-hemisphere situation, it prioritises the system, regardless of experience: it stays within the system of signs. Truth, for it, is coherence, because for it there is no world beyond, no Other, nothing outside the mind, to correspond with. “That’s what it says here.” So it corresponds with itself: in other words, it coheres. The right hemisphere prioritises what it learns from experience: the real state of existing things “out there”. For the right hemisphere, truth is not mere coherence, but correspondence with something other than itself. Truth, for it, is understood in the sense of being “true” to something, faithfulness to whatever it is that exists apart from ourselves.
However, it would be wrong to deduce from this that the right hemisphere just goes with what is familiar, adopting a comfortable conformity with experience to date. After all, one’s experience to date might be untrue to reality: then paying attention to logic would be an important way of moving away from a false customary assumption. And I have emphasized that it is the right hemisphere that helps us to get beyond the inauthentically familiar. The design of the above experiment specifically tests what happens when one is forced to choose between two paths to the truth in answering a question: using what one knows from experience or following a syllogism where the premises are blatantly false. The question was not whether the syllogism was structurally correct, but what actually was true. But in a different situation, where one is asked the different question “Is this syllogism structurally correct?”, even when the conclusion flies in the face of one’s experience, it is the right hemisphere which gets the answer correct, and the left hemisphere which is distracted by the familiarity of what it already thinks it knows, and gets the answer wrong. The common thread here is the role of the right hemisphere as “bullshit detector”. In the first case (answering the question “What is true here?”) detecting the bullshit involves using common sense. In the second case (answering “Is the logic here correct?”), detecting the bullshit involves resisting the obvious, the usual train of thought.
For me personally, working with McGilchrist’s model has dramatically improved my own internal bullshit-detection capacity. I’ve started to be able to sometimes smell the scent of rationalizations, even while the thoughts I’m having continue to feel true. This has been helpful for learning, for noticing when I’m being a jerk in relationships, and for noticing how I’m closing myself off to some line of thinking while debugging my code.
And the bullshit detection thing is just one element of it. The book relates dozens of other case studies on differences in way-of-being-and-perceiving of each hemisphere, and connects them with some core theories about the role of attention in cognition.
If you were surprised in reading this comment to discover that it’s not the left hemisphere that is best at syllogisms, then I would like to suggest there are important things that are known about the brain that you could learn by reading this book, that would help you think more effectively. (This also applies if you were not-particularly-surprised because your implicit prior was simply “hemispheres are irrelevant”; I was more in this camp.)
But in a different situation, where one is asked the different question “Is this syllogism structurally correct?”, even when the conclusion flies in the face of one’s experience, it is the right hemisphere which gets the answer correct, and the left hemisphere which is distracted by the familiarity of what it already thinks it knows, and gets the answer wrong.
Wait what? Surely the left/right hemispheres are accidentally reversed here? Or is the book saying that the left hemisphere always answers incorrectly, no matter what question you ask?
The book is saying that the left hemisphere answers incorrectly, in both cases! As I said, this is surprising.
I haven’t looked at the original research, but I found myself curious what would happen with a syllogism that is both invalid and has a false conclusion. My assumption is that either hemisphere would reject something like this:
Some cows are brown.
Some fish are iridescent.
Some cows are iridescent.
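To spell out why that syllogism should be rejected regardless of any facts about cows: it is formally invalid, since both “some” premises can be true in a world where the conclusion is false. A minimal sketch of such a countermodel (the individuals and predicates are just illustrative labels I made up):

```python
# A tiny countermodel: both premises hold, yet the conclusion fails,
# so "Some cows are brown; Some fish are iridescent; therefore
# Some cows are iridescent" is formally invalid.

cows = {"bessie"}            # one cow
fish = {"nemo"}              # one fish
brown = {"bessie"}           # the cow is brown
iridescent = {"nemo"}        # only the fish is iridescent

premise_1 = any(x in brown for x in cows)        # "Some cows are brown"
premise_2 = any(x in iridescent for x in fish)   # "Some fish are iridescent"
conclusion = any(x in iridescent for x in cows)  # "Some cows are iridescent"

# Premises true, conclusion false: the form guarantees nothing.
assert premise_1 and premise_2 and not conclusion
print("invalid: premises can be true while the conclusion is false")
```

So here, unlike the porcupine case, formal reasoning and world knowledge agree in rejecting the conclusion, which is what makes it an interesting probe.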
The left hemisphere seems to be where most of motivated cognition lives. If you’ve heard the bizarre stories about patients confabulating after strokes (e.g. “my limb isn’t paralyzed, I just don’t want to move it”), this is almost exclusively associated with damage to the right hemisphere. Many people, following Gazzaniga’s lead, seem to have assumed this was just because someone with a left hemisphere stroke can’t talk, but if you leave words aside, it is apparent that people with left hemisphere damage are distressed about their paralyzed right arm, whereas people with right hemisphere damage are often in denial.
Likewise, part of the job of a well-functioning left hemisphere is to have blind spots. It’s so zoomed in on whatever it’s focused on that the rest of the world might as well not exist. If you’ve heard of “hemispatial neglect”, the condition that leads to people shaving only half of their face, eating only half of their plate, or attempting to copy a drawing of an ordinary clock and ending up drawing something like this:
...then that’s again something that only happens when the left hemisphere is operating without the right (again, this can also be shown in healthy patients by temporarily deactivating one hemisphere). The left hemisphere has a narrow focus of attention, and only on the right side of things, and it doesn’t even manage to notice that it has omitted the other half, because as far as it’s concerned, the other half isn’t there. This is not a vision thing—asked to recall a familiar scene, such a patient may describe only the right half of it.
The book is saying that the left hemisphere answers incorrectly, in both cases! As I said, this is surprising.
That’s not just surprising, that’s absurd. I can absolutely believe the claim that the left hemisphere always takes what’s written for granted, and solves the syllogism formally. But the claim here is that the left hemisphere pays careful attention to the questions, solves them correctly, and then reverses the answer. Why would it do that? No mechanism is proposed.
I looked at the one paper that’s mentioned in the quote (Deglin and Kinsbourne), and they never ask the subjects whether the syllogisms are ‘structurally correct’; they only ask about the truth. And their main conclusion is that the left hemisphere always solves syllogisms formally, not that it’s always wrong.
If you’ve heard the bizarre stories about patients confabulating after strokes (eg “my limb isn’t paralyzed, I just don’t want to move it) this is almost unilaterally associated with damage to the right hemisphere.
Interesting, I didn’t know this only happened with the left hemisphere intact.
the claim here is that the left hemisphere pays careful attention to the questions, solves them correctly, and then reverses the answer.
Fwiw I also think that that is an absurd claim and I also think that nobody is actually claiming that here. The claim is something more like what has been claimed about System 1, “it takes shortcuts”, except in this case it’s roughly “to the left hemisphere, truth is coherence; logical coherence is preferred before map coherence, but both are preferred to anything that appears incoherent.”
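One way to read that as a concrete answer policy is the following toy sketch. This is purely my own formalization for illustration, not a model from McGilchrist or from Deglin & Kinsbourne; the policy names are made up:

```python
# Toy sketch of the claimed asymmetry: one policy answers "is the
# conclusion true?" by coherence with the stated premises ("what's
# written on the card"), the other by coherence with the world-model.

def answer(premises_imply: bool, world_model_says: bool, policy: str) -> bool:
    """Answer 'is the conclusion true?' under a given (caricatured) policy."""
    if policy == "text-first":      # caricature of the left-hemisphere pattern
        return premises_imply       # "that's what is written on the card"
    if policy == "world-first":     # caricature of the right-hemisphere pattern
        return world_model_says     # "but the porcupine is not a monkey!"
    raise ValueError(f"unknown policy: {policy}")

# The porcupine syllogism: the premises imply "climbs trees",
# but the world-model says porcupines don't.
print(answer(premises_imply=True, world_model_says=False, policy="text-first"))   # True
print(answer(premises_imply=True, world_model_says=False, policy="world-first"))  # False
```

On this reading, neither policy ever “solves it correctly and then reverses the answer”; they simply consult different sources of coherence, which is all the quoted experiment seems to show.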
I looked up the source for the “However” section and it’s not Deglin and Kinsbourne but Goel and Dolan (2003). I found it hard to read, but my sense is that what it’s saying is:
In general, people are worse at judging the validity of a logical syllogism when its conclusion contradicts their beliefs. (This should surprise nobody.)
Different parts of the brain appear to be recruited depending on whether the content of a syllogism is familiar:
A recent fMRI study (Goel, Buchel, Frith & Dolan, 2000) has provided evidence that syllogistic reasoning is implemented in two distinct brain systems whose engagement is primarily a function of the presence or absence of meaningful content. During content-based syllogistic reasoning (e.g. All apples are red fruit; All red fruit are poisonous; ∴ All apples are poisonous) a left hemisphere frontal and temporal lobe system is recruited. By contrast, in a formally identical reasoning task with arbitrary content (e.g. All A are B; All B are C; ∴ All A are C) a bilateral parietal system is recruited.
(Note: this is them analyzing what part of the brain is recruited when the task is completed successfully.)
This 2003 study investigates whether that’s about [concrete vs abstract content] vs [belief-laden vs belief neutral content] and concludes that it’s about beliefs, and also < something new about the neuroanatomy >.
I think what’s being implied by McGilchrist citing this paper (although it’s unclear to me if this was tested as directly as the Deglin & Kinsbourne study) is that without access to the right hemisphere, the left hemisphere’s process would be even more biased, or something.
I’d be interested in your take if you read the 2000 or 2003 papers.
McGilchrist himself has said that it doesn’t matter if the neuroscience is all wrong; it still makes a good metaphor. See this review, where McGilchrist’s “The Master and His Emissary” is quoted:
“If it could eventually be shown…that the two major ways, not just of thinking, but of being in the world, are not related to the two cerebral hemispheres, I would be surprised, but not unhappy. Ultimately what I have tried to point to is that the apparently separate ‘functions’ in each hemisphere fit together intelligently to form in each case a single coherent entity; that there are, not just currents here and there in the history of ideas, but consistent ways of being that persist across the history of the Western world, that are fundamentally opposed, though complementary, in what they reveal to us; and that the hemispheres of the brain can be seen as, at the very least, a metaphor for these…
What [Goethe’s Faust, Schopenhauer, Bergson, Scheler and Kant] all point to is the fundamentally divided nature of mental experience. When one puts that together with the fact that the brain is divided into two relatively independent chunks which just happen broadly to mirror the very dichotomies that are being pointed to – alienation versus engagement, abstraction versus incarnation, the categorical versus the unique, the general versus the particular, the part versus the whole, and so on – it seems like a metaphor that might have some literal truth. But if it turns out to be ‘just’ a metaphor, I will be content. I have a high regard for metaphor. It is how we come to understand the world.”
In which case, why is he peddling it? He is asserting the neuroscience as true. It matters whether it is true, because without it, he’s just another purveyor of intellectual artistic ramblings, like the ones he admires. And isn’t that dichotomising, in his terms, a left-brain thing to do?
I think that pretty much cuts the ground from under his whole system. It reduces the neuroscience story to a noble lie.
Brief reply about dog thing & just naming a part of the brain—I agree!
But saying “System 1” is also not useful unless you have a richer map of how System 1 works. In order for the emotional brain model to be useful, you need affordances for working with it. I got mine from the Bio-Emotive Framework, and since learning that model & technique, I’ve been more able to work with this stuff, whatever you want to call it; part of working with it involves identifying what’s going on at a level of detail beyond “S1”. There are also, of course, methods of working with this stuff that don’t require such a framework!
I’m appreciating you pointing this out, since it represents a way in which my comment was unhelpful—I didn’t actually give people these richer models, I just said “there are models out there that I’ve found much better than S1/S2 for talking about the same stuff”. I pitched the models, but didn’t actually share much of why I find them so useful. Although I’ve just added a long comment elaborating on some hemisphere stuff, so hopefully that helps.
Thanks! That does indeed sound valuable. Updated towards wanting to read that book.