While reading through your links (BTW the fourth link doesn’t go where it’s supposed to go), I came across this comment by Duncan Sabien:
But what I, at least, meant to convey was something like “so, there are all these really good epistemic norms that are hard to lodge in your S1, and hard to operationalize in the moment. If you do this other thing, where you talk about cruxes and search for overlap, somehow magically that causes you to cleave closer to those epistemic norms, in practice.”
[...] But I claim that we’re basically saying “Do X” because of a borne-out-in-practice prediction that it will result in people doing Y, where Y are the good norms you’ve identified as seemingly unrelated to the double crux framework.
Is this something you’d endorse? If so, it seems like someone who already has these good epistemic norms might not get much out of double crux. Do you agree with that, and if so do you agree with G Gordon Worley III that I’m such a person?
I do think you can practice asking ‘what would actually change my mind?’ on your own without a partner, whenever you notice yourself believing something strongly.
I feel like my answer to that question would usually be “an argument that I haven’t heard yet, or have heard but forgot, or have heard but haven’t understood yet”. My preferred mode of “doing disagreement” is usually to exchange arguments, counter-arguments, counter-counter-arguments, …, along with questions and explanations of those arguments, similar to a traditional adversarial debate, but with the goal of finding the truth for myself, the other person, and the audience, rather than trying to convince the other person or the audience of my position. E.g., I want to figure out whether there are any important arguments that I don’t yet know or understand, any flaws in my arguments that the other person can point out, and likewise whether there are any important arguments or flaws that I can point out to the other person or to the audience.
If your answer to my question above is “no” (i.e., there’s still something I can get out of learning “real doublecrux”), I’d be interested in further explanation of that. For example, are there any posts that compare the pros/cons of double crux with my way of “doing disagreement”?
Short answer is “if you don’t feel like you’re running into important, intractable disagreements, and don’t feel that something about your current conversational style is insufficient, I wouldn’t worry about doublecrux.”
In particular, I suspect in your case it’d be more valuable to spend marginal effort on distillation work (summarizing conversations) than on doing conversations better.
I *do* [weakly] expect doublecrux to also be relevant to AI Alignment debates, and think there might be things going on there that make it an improvement over “good faith adversarial debate.” (Once we’re not so behind on distillation, this might make sense to prioritize.)
As noted earlier, doublecrux usually starts with model sharing, and I think “good faith adversarial debate” is a pretty fine format for model sharing. The main advantages of doublecrux over adversarial debate are:
a) focusing on the parts that’d actually change your mind (i.e. if you detect someone posing a series of arguments that you predict won’t be persuasive to you, say ‘hey, my crux is more like this’ and switch to another topic entirely)
b) after you’ve completed the model sharing and covered all the relevant considerations, if you find yourselves staring at each other saying “but obviously these considerations add up to position X” vs. “obviously position Y”, then it becomes more important to focus on cruxes.
Thanks, this is really helpful for me to understand what doublecrux is for.
In particular, I suspect in your case it’d be more valuable to spend marginal effort on distillation work (summarizing conversations) than on doing conversations better.
I can’t think off the top of my head what conversations would be valuable to summarize. Do you have any specific suggestions?
(More directly addressing the Duncan Sabien quote: I roughly agree with the quote in terms of the immediate value of doublecrux. This sequence of posts was born from 2 years of arguing with LessWrong team members who had _something_ like ‘good faith’ and even ‘understanding of doublecrux in particular’, who nonetheless managed to disagree for months/years on deep, intractable issues. And yes, I think there’s something directly valuable about the doublecrux framework when you find yourself in that situation.)