Short answer is: if you don’t feel like you’re running into important, intractable disagreements, and that something about your current conversational style is insufficient, I wouldn’t worry about doublecrux.
In particular, I suspect in your case it’d be more valuable to spend marginal effort on distillation work (summarizing conversations) than on doing conversations better.
I *do* [weakly] expect doublecrux to also be relevant to AI Alignment debates, and think there might be things going on there that make it an improvement over “good faith adversarial debate.” (Once we’re not so far behind on distillation, this might make sense to prioritize.)
As noted earlier, doublecrux usually starts with model sharing, and I think “good faith adversarial debate” is a pretty fine format for model sharing. The main advantage of doublecrux over adversarial debate is:
a) focusing on the parts that’d actually change your mind (i.e. if you detect someone posing a series of arguments that you predict won’t be persuasive to you, say ‘hey, my crux is more like this’ and switch to another topic entirely)
b) after you’ve completed the model sharing and covered all the relevant considerations, if you find yourselves staring at each other saying “but obviously these considerations add up to position X” vs “obviously position Y”, then it becomes more important to focus on cruxes.
Thanks, this really helps me understand what doublecrux is for.
> In particular, I suspect in your case it’d be more valuable to spend marginal effort on distillation work (summarizing conversations) than on doing conversations better.
I can’t think off the top of my head what conversations would be valuable to summarize. Do you have any specific suggestions?
(More directly addressing the Duncan Sabien quote: I roughly agree with the quote in terms of the immediate value of doublecrux. This sequence of posts was born from 2 years of arguing with LessWrong team members who had _something_ like ‘good faith’ and even ‘understanding of doublecrux in particular’, who nonetheless managed to disagree for months/years on deep, intractable issues. And yes, I think there’s something directly valuable about the doublecrux framework when you find yourself in that situation.)