Making Beliefs Identity-Compatible
When we view our minds through the lens of large language models (LLMs), with their static memory prompts and mutable context window, we find a fascinating model of belief and identity formation. Picture this in the context of a debate between an atheist and a creationist: how can this LLM-like model explain the hurdles in finding common ground?
Firstly, we must acknowledge that our belief systems, much like an LLM’s static memory prompts, are slow to change. Guided by a lifetime of self-reinforcing experiences, our convictions, whether atheistic or creationist, develop a kind of inertia. They become resistant to sudden shifts, even in the face of a compelling argument.
Secondly, our beliefs generate self-protective outputs in our mind’s context window. These outputs act as a defense mechanism, safeguarding our core beliefs and often leading to reactionary responses instead of open-minded dialogue.
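To make the analogy concrete, here is a minimal toy sketch (all names hypothetical, and no real LLM API involved) of the architecture being invoked: a read-only static prompt standing in for identity, and a bounded mutable context window whose contents shape each output.

```python
# A toy model of the essay's analogy, not any real LLM API: identity as
# a read-only "static prompt", experience as a bounded mutable context.
from dataclasses import dataclass, field

@dataclass
class MindAsLLM:
    # Static prompt: core beliefs and identity; effectively read-only.
    static_prompt: str
    # Mutable context: recent experiences, arguments, social input.
    context_window: list[str] = field(default_factory=list)
    max_context: int = 8  # attention is finite; old items fall away

    def experience(self, event: str) -> None:
        """New input lands in the mutable context, never in the prompt."""
        self.context_window.append(event)
        self.context_window = self.context_window[-self.max_context:]

    def respond(self, challenge: str) -> str:
        """Output is conditioned on both the prompt and the context.
        A challenge that contradicts the prompt yields a defensive
        output unless the context supplies an identity-compatible bridge."""
        bridge = any("compatible" in e for e in self.context_window)
        if bridge:
            return f"Maybe {challenge!r} can fit with who I am."
        return f"I reject {challenge!r}; it conflicts with who I am."

mind = MindAsLLM(static_prompt="My faith is central to who I am.")
print(mind.respond("the evidence for evolution"))  # defensive output
mind.experience("met Christians who find evolution compatible with faith")
print(mind.respond("the evidence for evolution"))  # a bridge now exists
```

The point of the sketch is that nothing new ever writes to the static prompt directly; the only available lever is what gets added to the context, which is exactly the lever the next paragraph proposes pulling.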
To break through these barriers, we need to engage the mutable part of our cognitive context. By introducing new social connections and experiences, we can gently nudge the other person to add to or reinterpret their static memory prompts. This might mean introducing your creationist friend to scientifically open-minded Christians, or even to promoters of intelligent design. Perhaps you can encourage them to study evolution not as “truth,” but as an interesting alternative point of view, in much the same way that an atheist might take a passionate interest in world religions or an economist might work to master the point of view of a rival school of economic thought.
In the heat of a debate, however, this approach is rarely taken. Instead, atheists and creationists alike tend to grapple head-on with their fundamental disagreements, neglecting to demonstrate how an open-minded consideration of each other’s viewpoint can be reconciled with their existing identities.
To make headway in such debates, it’s not enough to present our viewpoint compellingly. We must also illustrate how this viewpoint can be entertained without upending the essential aspects of the other person’s identity. Only by doing this can we truly bridge the ideological divide and foster a richer, more empathetic dialogue, worthy of our complex cognitive architecture.
I think the important part is training a good simulation of a new worldview, not shifting weight to it or modifying an old worldview. To change your mind, you first need availability of something to change your mind to.
This mostly takes motivation to engage, plus pushing aside protocols/norms/habits that interfere with continual efficient learning. The alternative is never understanding a non-caricature version of the target worldview/skillset/frame, which can persist for decades despite regular superficial engagement.
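One way to picture this distinction (a purely illustrative sketch; the names and numbers are hypothetical, not from the comment): the fidelity of your simulation of a worldview and your credence in it are tracked separately, and study raises only the former.

```python
# Illustrative sketch: studying a worldview raises the fidelity of your
# simulation of it without directly moving your credence in it.
from dataclasses import dataclass

@dataclass
class Worldview:
    name: str
    fidelity: float = 0.0  # how well you can simulate it, 0..1
    credence: float = 0.0  # how likely you think it is true, 0..1

    def study(self, effort: float) -> None:
        """Engagement improves the simulation; understanding and
        endorsement are deliberately decoupled."""
        self.fidelity = min(1.0, self.fidelity + effort)

def can_change_mind_to(view: Worldview, threshold: float = 0.7) -> bool:
    # You can only adopt a view that is already available in enough
    # detail to be evaluated on its own terms.
    return view.fidelity >= threshold

rival = Worldview("rival school of thought")
rival.study(0.3)
print(can_change_mind_to(rival))  # False: not yet well simulated
rival.study(0.5)
print(can_change_mind_to(rival))  # True: something to change your mind *to*
```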
Do you mean that preserving your openness to new ideas is about being able to first try on new perspectives without necessarily adopting them as the truth? If so, I agree, and I think that captures another oft-neglected aspect of debate. We tend to lump together an explanation of what our worldview is, with a claim that our worldview is true.
When all participants in the debate view opportunities to debate the topic in question as rare and consequential, all the focus goes into fighting over some sort of perception of victory, rather than into patiently trying to understand the other person’s point of view. Usually, that requires allowing the other person to adopt, at least for a while, the perceived role of the expert or leader, and there’s always a good chance they’ll refuse to switch places with you and try to learn from you as well.
That said, I do think that there are often real asymmetries in the level of expertise that go unrecognized in debate, perhaps for Dunning-Kruger reasons. Experts shouldn’t demand deference to their authority, and I don’t think that strategy works very well. Nevertheless, it’s important for experts to be able to use their expertise effectively in order to spread knowledge and the better decision-making that rests on it.
My take is that this requires experts to understand the identities and formative memories that underpin the incorrect beliefs of the other person, and conduct their discussion in such a way as to help the other person see how they can accept the expert’s knowledge while preserving their identity intact. Sometimes, that will not be possible. An atheist probably can’t convince a Christian that there’s a way to keep their Christian identity intact while disbelieving in God.
Other times, it might be. Maybe an anti-vax person sees themselves as a defender of personal freedom, a skeptic, a person who questions authority, in harmony with nature, or protective of their children’s wellbeing.
We might guess that being protective of their children’s wellbeing isn’t the central issue, because both the pro- and anti-vax sides are striving hard to reinforce that identity. Skepticism probably isn’t the main motive either, since there’s lots to be skeptical of in the anti-vax world.
But defending personal freedom, questioning authority, and being in harmony with nature seem to me to be identities more in tune with being anti-vax than pro-vax. I imagine large billboards saying “The COVID-19 vaccine works, and it’s still OK if you don’t get it” might be a small step toward addressing the personal freedom/question authority identity. And if we’d framed COVID-19 as a possible lab-generated superbug, with the mRNA vaccine harnessing your body’s natural infection-fighting response rather than being an example of big pharma at its novel and high-tech best, we might have done a better job of appealing to the ‘in harmony with nature’ identity.
The idea that a worldview needs to be in any way in tune with your own to be learned is one of those protocols/norms/habits that interfere with efficient learning. Until something is learned, it’s much less convenient to assess it, or to extract features/gears to reassemble as you see fit.
This is mostly about complicated, distant collections of related ideas/skills. Changing your mind or believing is more central for smaller or decomposable ideas that can be adopted piecemeal, but not every idea can be straightforwardly bootstrapped. Thus the utility of working on understanding perplexing things while suspending judgement. Adopting debate for this purpose is about figuring out misunderstandings about the content of a single worldview, even if its claims are actively disbelieved, not about cruxes that connect different worldviews.
To me, you seem to be describing a pretty ideal version of consciously practiced rationality—it’s a good way to be or debate among those in scout mindset. That’s useful indeed!
I am interested here mainly in how to better interface with people who participate in debate, and who may hold a lot of formal or informal power, but who do not subscribe to rationalist culture. People who don’t believe, for whatever reason, in the idea that you can and should learn ideas thoroughly before judging them. Those who keep their identities large and opt to stay in soldier mindset, even if they wouldn’t agree with Paul Graham or Julia Galef’s framings of those terms or wouldn’t agree such descriptors apply to them.
The point is that there is a problem that can be mostly solved this way: bootstrapping understanding of a strange frame. (It’s the wrong tool if we are judging credence or details in a frame that’s already mostly understood, the more usual goal for meaningful debate.) It’s not needed if there is a way of getting there step-by-step, with each argument accepted individually on its own strength.
But sometimes there is no such straightforward way, as when learning a new language or a technical topic with its collection of assumed prerequisites. Then it’s necessary to learn things without yet seeing how they could be relevant (occasionally absurd things, or things believed to be false), in the hope that it will all make sense eventually, after enough pieces are available to your own mind to assemble into a competence that allows correctly understanding individual claims.
So it’s not a solution when stipulated as not applicable, but my guess is that when it’s useful, getting around it is even harder than changing habits in a way that allows adopting it. Which is not something that a single conversation can achieve. Hence the difficulty of breaking out of falsehood-ridden ideologies, even without an oppressive community that would enforce compliance.
I’m not quite following you—I’m struggling to see the connection between what you’re saying and what I’m saying. Like, I get the following points:
Sometimes, you need to learn a bunch of prerequisites without experiencing them as useful, as when you learn your initial vocabulary for a language or the rudimentary concepts of statistics.
Sometimes, you can just get to a place of understanding an argument and evaluating it via patient, step-by-step evaluation of its claims.
Sometimes, you have to separate understanding the argument from evaluating it.
The part that confuses me is the third paragraph, first sentence, where you use the word “it” a lot and I can’t quite tell what “it” is referring to.
Learning prerequisites is an example that’s a bit off-center (sorry!): the strangeness of a frame is not just unfamiliar facts and terms, but also unexpected emphasis and contentious premises. This makes it harder to accept its elements than to build them up on their own island. Hanson’s recent podcast is a more central example for me.
By step-by-step learning I meant a process similar to reading a textbook, with chapters making sense in order, as you read them. As opposed to learning a language by reading a barely-understandable text, where nothing quite makes sense and won’t for some time.
The “it” is the procedure of letting strange frames grow in your own mind without yet having a handle on how/whether they make sense. The sentence is a response to your suggesting that debate with a person not practicing this process is not a place for it. The point is I’m not sure what the better alternative would be. Turning a strange frame into a step-by-step argument often makes it even harder to grasp.
Ah, that makes sense. Yes, I agree that carefully breaking down an argument into steps isn’t necessarily better than just letting it grow by bits and pieces. What I’m trying to emphasize is that if you can transmit an attitude of interest and openness toward the topic, the classic idea of instilling passion in another person, then that solves a lot of the problem.
Underneath that, I think a big barrier to passion, interest and openness for some topic is a feeling that the topic conflicts with an identity. A Christian might perceive evolution as in conflict with their Christian identity, and it will be difficult or impossible for even the most inspiring evolutionist to instill interest in that topic without first overcoming the identity conflict. That’s what interests me.
I don’t think that identity conflict explains all failures to connect, not by a long shot. But when all the pieces are there—two smart people, talking at length, both with a lot of energy, and yet there’s a lot of rancor and no progress is made—I suspect that perceptions of identity conflict are to blame.
Your last shortform made it clearer that what you discuss could also be framed as seeking ways of getting the process started, and exploring obstructions.
A lot of this depends on the assumption of an ability to direct skepticism internally; otherwise you risk stumbling into the derogatory senses of “having an open mind” (“so open your brains fall out”). Traditional skepticism puts the boundary around your whole person or even community. With a good starting point, this keeps a person relatively sane and lets in incremental improvements. With a bad starting point, it makes them irredeemable. This is the boundary of a memeplex that infests one’s mind, a convergently useful thing for most memeplexes to maintain. Any energy specific people would have for engagement is then spent on defending the boundary, only letting through what’s already permitted by the reigning memeplex. Thus debates between people from different camps are largely farcical, mostly recruitment drives for the audience.
A shorter path to self-improvement naturally turns skepticism inward, debugging your own thoughts that are already well past that barrier. Unlike the outer barriers, this is an asymmetric weapon that responds to the truth or falsity of ideas that are already accepted. But once it’s in place, it becomes much safer to lower the outer barriers, to let other memeplexes open embassies in your own mind. Then the job of skepticism is defending your own island in an archipelago of ideas hosted in your mind, ideas that are all intuitively available to various degrees and allowed to grow in clarity, even as they often hopelessly contradict each other.
However, this is not a natural outcome of skepticism turning inward. If the scope of skepticism remains too wide, greedily debugging everything, the other islands wither before they gain sufficient clarity to contribute. So there are at least two widespread obstructions to the archipelago mind. First, external skepticism that won’t let unapproved ideas in, justified by the damage they would do in the absence of internal skepticism, with selection promoting memeplexes that end up encouraging such skepticism. Second, internal skepticism that targets the whole mind rather than a single island of your own beliefs, justified by its success in exterminating nonsense.