New post: Some things I think about Double Crux and related topics
I’ve spent a lot of my discretionary time working on the broad problem of developing tools for bridging deep disagreements and transferring tacit knowledge. I’m also probably the person who has spent the most time explicitly thinking about and working with CFAR’s Double Crux framework. It seems good for at least some of my high-level thoughts to be written up someplace, even if I’m not going to go into detail about, defend, or substantiate most of them.
The following are my own beliefs and do not necessarily represent CFAR, or anyone else.
I, of course, reserve the right to change my mind.
[Throughout I use “Double Crux” to refer to the Double Crux technique, the Double Crux class, or a Double Crux conversation, and I use “double crux” to refer to a proposition that is a shared crux for two people in a conversation.]
Here are some things I currently believe:
(General)
Double Crux is one (highly important) tool/framework among many. I want to distinguish between the overall art of untangling and resolving deep disagreements and the Double Crux tool in particular. The Double Crux framework is maybe the most important tool (that I know of) for resolving disagreements, but it is only one tool/framework in an ensemble.
Some other tools/frameworks that are not strictly part of Double Crux (but which are sometimes crucial to bridging disagreements) include NVC, methods for managing people’s intentions and goals, various forms of co-articulation (helping to draw out an inchoate model from one’s conversational partner), etc.
In some contexts other tools are substitutes for Double Crux (i.e., another framework is more useful), and in some cases other tools are helpful or necessary complements (i.e., they solve problems or smooth the process within the Double Crux frame).
In particular, my personal conversational facilitation repertoire is about 60% Double Crux-related techniques, and 40% other frameworks that are not strictly within the frame of Double Crux.
Just to say it clearly: I don’t think Double Crux is the only way to resolve disagreements, or the best way in all contexts. (Though I think it may be the best way, that I know of, in a plurality of common contexts?)
The ideal use case for Double Crux is when...
There are two people...
...who have a real, action-relevant decision...
...that they need to make together (they can’t just do their own different things)...
...in which both people have strong, visceral intuitions.
Double Cruxes are almost always conversations between two people’s System 1s.
You can Double Crux between two people’s unendorsed intuitions. (For instance, Alice and Bob are discussing a question about open borders. They both agree that neither of them is an economist, that neither of them trusts their intuitions here, and that if they had to actually make this decision, it would be crucial to spend a lot of time doing research, examining the evidence, and consulting experts. But nevertheless, Alice’s current intuition leans in favor of open borders, and Bob’s current intuition leans against. This is a great starting point for a Double Crux.)
Double cruxes (as in a crux that is shared by both parties in a disagreement) are common and useful. Most disagreements have implicit double cruxes, though identifying them can sometimes be tricky.
Conjunctive cruxes (I would change my mind about X if I changed my mind about Y and about Z, but not if I only changed my mind about Y or about Z) are common.
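To make the conjunctive structure concrete, here is a minimal sketch (my own illustration; the propositions are invented): the belief about X only flips when both sub-beliefs flip.

```python
# A conjunctive crux modeled as a boolean function (illustrative only).
# X stays standing as long as EITHER supporting belief Y or Z stands, so
# changing my mind about Y alone (or Z alone) is not a crux -- only the
# conjunction "Y and Z both flipped" moves X.

def believes_x(believes_y: bool, believes_z: bool) -> bool:
    return believes_y or believes_z

assert believes_x(True, True)        # baseline: X is held
assert believes_x(False, True)       # flipping Y alone doesn't flip X
assert believes_x(True, False)       # flipping Z alone doesn't flip X
assert not believes_x(False, False)  # flipping both Y and Z flips X
```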
Folks sometimes object that Double Crux won’t work because their belief depends on a large number of considerations, each of which has only a small impact on their overall belief, so no one consideration is a crux. In practice, I find that there are double cruxes to be found even in cases where people expect their beliefs to have this structure.
Theoretically, it makes sense that we would find double cruxes in these scenarios: if a person has a strong disagreement (including a disagreement of intuition) with someone else, we should expect that there are a small number of considerations doing most of the work of causing one person to think one thing and the other to think something else. It is improbable that each person’s belief depends on 50 factors, with most of those factors pointing in one direction for Alice and in the other direction for Bob, unless the factors are not independent. If considerations are correlated, you can abstract out the fact or belief that generates the differing predictions in all of those separate considerations. That “generating belief” is the crux.
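As a rough illustration of that improbability argument (a sketch I’m adding, with arbitrary thresholds): if each person’s view really were the sum of many independent considerations, strongly opposed conclusions would almost never arise by chance.

```python
# Monte Carlo sketch: how often do two people's 50 *independent* +1/-1
# considerations net strongly in opposite directions? (Thresholds are
# arbitrary; the point is the order of magnitude.)
import random

def belief_score(n_factors: int = 50) -> int:
    return sum(random.choice([-1, 1]) for _ in range(n_factors))

random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    alice, bob = belief_score(), belief_score()
    if alice >= 15 and bob <= -15:  # strongly opposed conclusions
        hits += 1
print(hits / trials)  # on the order of 0.03% of trials -- a shared
                      # "generating belief" is the more plausible explanation
```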
That said, there is a different conversational approach that I sometimes use, which involves delineating all of the key considerations (then doing Goal-factoring-style relevance and completeness checks), and then dealing with each consideration one at a time (often via a fractal tree structure: listing the key considerations of each of the higher-level considerations).
This approach absolutely requires paper, and skillful (firm, gentle) facilitation, because people will almost universally try to hop around between considerations, and they need to be viscerally assured that their other concerns are recorded and will be dealt with in due course, in order to engage deeply with any single consideration.
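For the bookkeeping itself, here is a minimal sketch (my own construction, not a CFAR artifact) of what the paper is doing: every consideration gets recorded as a node the moment it is raised, and the facilitator walks the tree one node at a time, settling sub-considerations before returning to the parent claim.

```python
# A fractal tree of considerations (illustrative data structure).
# Recording a concern as a child node is the "your point is captured and
# will be dealt with in due course" move; traversal enforces one-at-a-time.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Consideration:
    claim: str
    children: list = field(default_factory=list)
    resolved: bool = False

    def add(self, claim: str) -> "Consideration":
        node = Consideration(claim)
        self.children.append(node)
        return node

def next_to_discuss(node: Consideration) -> Optional[Consideration]:
    # Post-order: settle the sub-considerations before the parent claim.
    for child in node.children:
        found = next_to_discuss(child)
        if found is not None:
            return found
    return None if node.resolved else node

root = Consideration("Should we relocate the office?")
cost = root.add("Total cost goes down")
cost.add("Rent is cheaper")
cost.add("Moving costs are one-time")
root.add("Commutes get worse")
print(next_to_discuss(root).claim)  # "Rent is cheaper": the first open leaf
```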
About 60% of the power of Double Crux comes from operationalizing or being specific.
I quite like Liron’s recent sequence on being specific. It re-reminded me of some basic things that have been helpful in several recent conversations. In particular, I like the move of having a conversational partner paint a specific, best case scenario, as a starting point for discussion.
(However, I’m concerned about Less Wrong readers trying this with a spirit of trying to “catch out” one’s conversational partner in inconsistency, instead of trying to understand what their partner wants to say, and thereby shooting themselves in the foot. I think the attitude of looking to “catch out” is usually counterproductive to both understanding and persuasion. People rarely change their mind when they feel like you have trapped them in some inconsistency, but they often do change their mind if they feel like you’ve actually heard and understood their belief / what they are trying to say / what they are trying to defend, and then provide relevant evidence and argument. In general (but not universally) it is more productive to adopt a collaborative attitude of sincerely trying to help a person articulate, clarify, and substantiate the point they are trying to make, even if you suspect that their point is ultimately wrong and confused.)
As an aside, specificity and operationalization are also the engine that makes NVC work. Being specific is really super powerful.
Many (~50% of) disagreements evaporate upon operationalization, though this happens less frequently than people think. And if you seem to agree about all of the facts, and agree about all specific operationalizations, but nevertheless seem to have differing attitudes about a question, that should be a flag. [I have a post that I’ll publish soon about this problem.]
You should be using paper when Double Cruxing. Keep track of the chain of Double Cruxes, and keep them in view.
People talk past each other all the time, and often don’t notice it. Frequently paraphrasing your current understanding of what your conversational partner is saying helps with this. [There is a lot more to say about this problem, and details about how to solve it effectively].
I don’t endorse the Double Crux “algorithm” described in the canonical post. That is, I don’t think that the best way to steer a Double Crux conversation is to hew to those 5 steps in that order. Actually finding double cruxes is, in practice, much more complicated, and there are a large number of heuristics and TAPs that make the process work. I regard that algorithm as an early (and self-conscious) attempt to delineate moves that would help move a conversation towards double cruxes.
This is my current best attempt at distilling the core moves that make Double Crux work, though this leaves out a lot.
In practice, I think that double cruxes most frequently emerge not from people independently generating their own lists of cruxes (though this is useful). Rather, double cruxes usually emerge from the move of “checking if the point that your partner made is a crux for you.”
I strongly endorse facilitation of basically all tricky conversations, Double Crux oriented or not. It is much easier to have a third party track the meta and help steer, instead of the participants, whose working memory is (and should be) full of the object level.
So-called “Triple Crux” is not a feasible operation. If you have more than two stakeholders, have two of them Double Crux, and then have one of those two Double Crux with the third person. Things get exponentially trickier as you add more people. I don’t think that Double Crux is a feasible method for coordinating more than ~6 people. We’ll need other methods for that.
Double Crux is much easier when both parties are interested in truth-seeking and in changing their mind, and are assuming good faith about the other. But, these are not strict prerequisites, and unilateral Double Crux is totally a thing.
People being defensive, emotional, or ego-filled does not preclude a productive Double Crux. Some particular auxiliary skills are required for navigating those situations, however.
This is a good start for the relevant skills.
If a person wants to get better at Double Crux skills, I recommend they cross-train with IDC. Any move that works in IDC you should try in Double Crux. Any move that works in Double Crux you should try in IDC. This will seem silly sometimes, but I am pretty serious about it, even in the silly-seeming cases. I’ve learned a lot this way.
I don’t think Double Crux necessarily runs into a problem of “black box beliefs”, wherein one can no longer make progress because the disagreement comes down to System 1 heuristics/models that one or both parties learned from some training data, but into which they can’t introspect. Almost always, there are ways to draw out those models.
The simplest way to do this (which is not the only or best way, depending on the circumstances) involves generating many examples and testing the “black box” against them. Vary the hypothetical situation to triangulate the exact circumstances in which the “black box” produces which outputs.
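As a sketch of that move (my illustration; the scenario features and the latent rule are invented for the example), think of the intuition as an opaque function and sweep structured hypotheticals through it until the pattern of outputs exposes the rule:

```python
# Probing a "black box" intuition by varying hypotheticals (illustrative).
# In a real conversation, gut_reaction() is "ask your partner how they feel
# about this concrete case," not something you can actually call.
from itertools import product

def gut_reaction(case: dict) -> str:
    # Hypothetical latent rule the person can't articulate directly.
    if case["harm_is_reversible"] and case["consent_was_given"]:
        return "seems fine"
    return "uneasy"

features = {
    "harm_is_reversible": [True, False],
    "consent_was_given": [True, False],
}
for values in product(*features.values()):
    case = dict(zip(features, values))
    print(case, "->", gut_reaction(case))
# Holding one feature fixed and varying the other shows exactly which
# combination flips the output -- that is the triangulation move.
```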
I am not making the universal claim that one never runs into black box beliefs that can’t be dealt with.
Disagreements rarely come down to “fundamental value disagreements”. If you think that you have gotten to a disagreement about fundamental values, I suspect there was another conversational tack that would have been more productive.
Also, you can totally Double Crux about values. In practice, you can often treat values like beliefs: often there is some evidence that a person could observe, at least in principle, that would convince them to hold or not hold some “fundamental” value.
I am not making the claim that there is no such thing as fundamental values, or that all values are Double Crux-able.
A semi-esoteric point: cruxes are (or can be) contiguous with operationalizations. For instance, if I’m having a disagreement about whether advertising produces value on net, I might operationalize to “beer commercials, in particular, produce value on net”, which (if I think that operationalization actually captures the original question) is isomorphic to “The value of beer commercials is a crux for the value of advertising. I would change my mind about advertising in general, if I changed my mind about beer commercials.” (This is an evidential crux, as opposed to the more common causal crux. (More on this distinction in future posts.))
People’s beliefs are strongly informed by their incentives. This makes me somewhat less optimistic about tools in this space than I would otherwise be, but I still think there’s hope.
There are a number of gaps in the repertoire of conversational tools that I’m currently aware of. One of the most important holes is the lack of a method for dealing with psychological blindspots. These days, I often run out of ability to make a conversation go well when we bump into a blindspot in one person or the other (sometimes, there seem to be psychological blindspots on both sides). Tools wanted, in this domain.
(The Double Crux class)
Knowing how to identify Double Cruxes can be kind of tricky, and I don’t think that most participants learn the knack from the 55 to 70 minute Double Crux class at a CFAR workshop.
Currently, I think I can teach the basic knack (not including all the other heuristics and skills) to a person in about 3 hours, but I’m still playing around with how to do this most efficiently. (The “Basic Double Crux pattern” post is the distillation of my current approach.)
This is one development avenue that would particularly benefit from parallel search: If you feel like you “get” Double Crux, and can identify Double Cruxes fairly reliably and quickly, it might be helpful if you explicated your process.
That said, there are a lot of relevant complements and sub-skills to Double Crux, and to bridging disagreements more generally.
The most important function of the Double Crux class at CFAR workshops is teaching and propagating the concept of a “crux”, and to a lesser extent, the concept of a “double crux”. These are very useful shorthands for one’s personal thinking and for discourse, which are great to have in the collective lexicon.
(Some other things)
Personally, I am mostly focused on developing deep methods (perhaps for training high-expertise specialists) that increase the range of disagreements that the x-risk ecosystem can solve at all. I care more about this goal than about developing shallow tools that are useful “out of the box” for smart non-specialists, or about trying to change the conversational norms of various relevant communities (though both of those are secondary goals).
I am highly skeptical of teaching many-to-most of the important skills for bridging deep disagreement, via anything other than ~one-on-one, in-person interaction.
In large part due to prodding from a number of people, I am polishing all my existing drafts of Double Crux stuff (and writing some new posts), and posting them here over the next few weeks. (There are already some drafts, still being edited, available on my blog.)
I have a standing offer to facilitate conversations and disagreements (Double Crux or not) for rationalists and EAs. Email me at eli [at] rationality [dot] org if that’s something you’re interested in.
People rarely change their mind when they feel like you have trapped them in some inconsistency [...] In general (but not universally) it is more productive to adopt a collaborative attitude of sincerely trying to help a person articulate, clarify, and substantiate [bolding mine—ZMD]
“People” in general rarely change their mind when they feel like you have trapped them in some inconsistency, but people using the double-crux method in the first place are going to be aspiring rationalists, right? Trapping someone in an inconsistency (if it’s a real inconsistency and not a false perception of one) is collaborative: the thing they were thinking was flawed, and you helped them see the flaw! That’s a good thing! (As it is written of the fifth virtue, “Do not believe you do others a favor if you accept their arguments; the favor is to you.”)
Obviously, I agree that people should try to understand their interlocutors. (If you performatively try to find fault in something you don’t understand, then apparent “faults” you find are likely to be your own misunderstandings rather than actual faults.) But if someone spots an actual inconsistency in my ideas, I want them to tell me right away. Performing the behavior of trying to substantiate something that cannot, in fact, be substantiated (because it contains an inconsistency) is a waste of everyone’s time!
In general (but not universally) it is more productive to adopt a collaborative attitude
Can you say more about what you think the exceptions to the general-but-not-universal rule are? (Um, specifically.)
I would think that inconsistencies are easier to appreciate when they are in the central machinery. A rationalist might have more load-bearing beliefs, so most beliefs are central to at least something, but I think a centrality/point-of-communication check is more upside than downside to keep. Also, cognitive time spent looking for inconsistencies could be better spent on more constructive activities. Then there is the whole class of heuristics which don’t even claim to be consistent. So the ability to pass by an inconsistency without hanging onto it will see use.
Currently, I think I can teach the basic knack (not including all the other heuristics and skills) to a person in about 3 hours, but I’m still playing around with how to do this most efficiently. (The “Basic Double Crux pattern” post is the distillation of my current approach.)
How about doing this a few times on video? Watching the video might not be as effective as the one-on-one teaching, but I would expect that watching a few 1-on-1 explanations would be a good way to learn about the process.
From a learning perspective it also helps a lot for reflecting on the technique. The early NLP folks spent a lot of time analysing tapes of people performing techniques to better understand the techniques.
I did in fact record a test session of attempting to teach this via Zoom last weekend. This was the first time I tried a test session via Zoom, however, and there were a lot of kinks to work out, so I probably won’t publish that version in particular.
But yeah, I’m interested in making video recordings of some of this stuff and putting them up online.
Thanks for mentioning conjunctive cruxes. That was always my biggest objection to this technique. At least when I went through CFAR, the training completely ignored this possibility. It was clear that it often worked anyway, but the impression I got was that the general frame mattered more than the precise methodology, which at that time still seemed to need refinement.
FYI the numbering in the (General) section is pretty off.
What do you mean? All the numbers are in order. Are you objecting to the nested numbers?
To me, it looks like the numbers in the General section go 1, 4, 5, 5, 6, 7, 8, 9, 3, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 2, 3, 3, 4, 2, 3, 4 (ignoring the nested numbers).
(this appears to be a problem where it displays differently on different browser/OS pairs)