A specific sub-point that I don’t want to be lost in the sea of my previous comment:
A related concern of mine is a ‘castle-and-keep’-style defence of double crux which arises from equivocating between double crux per se and a host of admirable epistemic norms it may rely upon. Thus, when defended, double crux may transmogrify from “look for some C which, if you changed your mind about it, would change your mind about B too” into a large set of incontrovertibly good epistemic practices: “It is better to be collaborative rather than combative in discussion, and to be willing to change one’s mind, etc.” Yet even if double cruxing is associated with (or requires) these good practices, it is not a necessary condition for them.
I think there’s a third path here, which is something like “double crux may be an instrumentally useful tool in causing these admirable epistemic norms to take root, or to move from nominally-good to actually-practiced.”
I attempted in the original LW post, and attempt each time I teach double crux, to underscore that double crux has as its casus belli specific failure modes in normal discourse. The point is not, actually, to adhere rigidly to the specific algorithm; rather, the algorithm highlights a certain productive way of thinking and being. While often my conversations don’t resemble pure double crux, I’ve always found that a given marginal step toward pure double crux produces value for me.
Which seems to fit with your understanding of the situation, except that you object to a claim that CFAR and I didn’t intend to make. You interpreted us (probably reasonably and fairly) as doing a sort of motte-and-bailey bait-and-switch. But what I, at least, meant to convey was something like “so, there are all these really good epistemic norms that are hard to lodge in your S1, and hard to operationalize in the moment. If you do this other thing, where you talk about cruxes and search for overlap, somehow magically that causes you to cleave closer to those epistemic norms, in practice.”
It’s like the sort of thing where, if I tell you that it’s an experiment about breathing, your breathing starts doing weird and unhelpful things. But if I tell you that it’s an experiment about calculation, I can get good data on your breathing while your attention is otherwise occupied.
Hopefully, we’re not being that deceptive. But I claim that we’re basically saying “Do X” because of a borne-out-in-practice prediction that it will result in people doing Y, where Y is the set of good norms you’ve identified as seemingly unrelated to the double crux framework. I’ve found that directly saying “Do Y” doesn’t produce the desired results, and so I say “Do X” and then feel victorious when Y results, but at the cost of being vulnerable to criticism along the lines of “Well, yeah, sure, but your intervention was pointed in the wrong direction.”