To me, you seem to be describing a pretty ideal version of consciously practiced rationality—it’s a good way to be or debate among those in scout mindset. That’s useful indeed!
I am interested here mainly in how to better interface with people who participate in debate, and who may hold a lot of formal or informal power, but who do not subscribe to rationalist culture. People who don’t believe, for whatever reason, in the idea that you can and should learn ideas thoroughly before judging them. Those who keep their identities large and opt to stay in soldier mindset, even if they wouldn’t agree with Paul Graham or Julia Galef’s framings of those terms or wouldn’t agree such descriptors apply to them.
The point is that there is a problem that can be mostly solved this way, bootstrapping understanding of a strange frame. (It’s the wrong tool if we are judging credence or details in a frame that’s already mostly understood, the more usual goal for meaningful debate.) It’s not needed if there is a way of getting there step-by-step, with each argument accepted individually on its own strength.
But sometimes there is no such straightforward way, like when learning a new language or a technical topic with its collection of assumed prerequisites. Then, it’s necessary to learn things without yet seeing how they could be relevant, occasionally absurd things or things believed to be false, in the hope that it will make sense eventually, after enough pieces are available to your own mind to assemble into a competence that allows correctly understanding individual claims.
So it’s not a solution when it’s stipulated to be inapplicable, but my guess is that when it is useful, getting around it is even harder than changing one’s habits in a way that allows adopting it. That is not something a single conversation can achieve. Hence the difficulty of breaking out of falsehood-ridden ideologies, even without an oppressive community enforcing compliance.
I’m not quite following you—I’m struggling to see the connection between what you’re saying and what I’m saying. Like, I get the following points:
Sometimes, you need to learn a bunch of prerequisites without experiencing them as useful, as when you learn your initial vocabulary for a language or the rudimentary concepts of statistics.
Sometimes, you can just get to a place of understanding an argument and evaluating it via patient, step-by-step evaluation of its claims.
Sometimes, you have to separate understanding the argument from evaluating it.
The part that confuses me is the third paragraph, first sentence, where you use the word “it” a lot and I can’t quite tell what “it” is referring to.
Learning prerequisites is an example that’s a bit off-center (sorry!): the strangeness of a frame lies not just in unfamiliar facts and terms, but in unexpected emphasis and contentious premises. That makes it harder to accept its elements individually than to build them up on their own island. Hanson’s recent podcast is a more central example for me.
By step-by-step learning I meant a process similar to reading a textbook, with chapters making sense in order, as you read them. As opposed to learning a language by reading a barely-understandable text, where nothing quite makes sense and won’t for some time.
The “it” is the procedure of letting strange frames grow in your own mind without yet having a handle on how, or whether, they make sense. The sentence is a response to your suggestion that a debate with a person who doesn’t practice this process is no place for it. My point is that I’m not sure what the better alternative would be. Turning a strange frame into a step-by-step argument often makes it even harder to grasp.
Ah, that makes sense. Yes, I agree that carefully breaking down an argument into steps isn’t necessarily better than just letting it grow by bits and pieces. What I’m trying to emphasize is that if you can transmit an attitude of interest and openness toward the topic—the classic idea of instilling passion in another person—then that solves a lot of the problem.
Underneath that, I think a big barrier to passion, interest and openness for some topic is a feeling that the topic conflicts with an identity. A Christian might perceive evolution as in conflict with their Christian identity, and it will be difficult or impossible for even the most inspiring evolutionist to instill interest in that topic without first overcoming the identity conflict. That’s what interests me.
I don’t think that identity conflict explains all failures to connect, not by a long shot. But when all the pieces are there—two smart people, talking at length, both with a lot of energy—and yet there’s a lot of rancor and no progress is made, I suspect that perceived identity conflicts are to blame.
Your last shortform made it clearer that what you discuss could also be framed as seeking ways of getting the process started, and exploring obstructions.
A lot of this depends on the assumed ability to direct skepticism internally; otherwise you risk stumbling into the derogatory sense of “having an open mind” (“so open your brains fall out”). Traditional skepticism puts the boundary around your whole person or even community. With a good starting point, this keeps a person relatively sane and lets in incremental improvements. With a bad starting point, it makes them irredeemable. This is the boundary of a memeplex that infests one’s mind, a convergently useful thing for most memeplexes to maintain. Any energy specific people would have for engagement is then spent on defending the boundary, only letting through what’s already permitted by the reigning memeplex. Thus debates between people from different camps are largely farcical, mostly recruitment drives for the audience.
A shorter path to self-improvement naturally turns skepticism inward, debugs your own thoughts that are well past that barrier. Unlike the outer barriers, this is an asymmetric weapon that reflects on the truth or falsity of ideas that are already accepted. But once it’s in place, it becomes much safer to lower the outer barriers, to let other memeplexes open embassies in your own mind. Then the job of skepticism is defending your own island in an archipelago of ideas hosted in your own mind that are all intuitively available to various degrees and allowed to grow in clarity, but often hopelessly contradict each other.
However, this is not a natural outcome of skepticism turning inward. If the scope of skepticism remains too wide, greedily debugging everything, other islands wither before they gain sufficient clarity to contribute. So there are at least two widespread obstructions to the archipelago mind. First, external skepticism that won’t let unapproved ideas in, justified by the damage they’d do in the absence of internal skepticism, with selection promoting memeplexes that end up encouraging such skepticism. Second, internal skepticism that targets the whole mind rather than a single island of your own beliefs, justified by its success in exterminating nonsense.