I want to specifically object to the last part of the post (the rest of it is fine and I agree almost completely with both the explicit positive claims and the implied normative ones).
But at the end, you talk about double-crux, and say:
And to try to double-crux with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.
Well, and why did you think you could take it for granted in the community? I don’t think that’s justified at all—post-rationalists and rationalist-adjacents aside!
For instance, while I don’t like to label myself as any kind of ‘-ist’—even a ‘rationalist’—the term applies to me, I think, better than it does to most people. (This is by no means a claim of any extraordinary rationalist accomplishments, please note; in fact, if pressed for a label, I’d have to say that I prefer the old one ‘aspiring rationalist’… but then, your given definition—and I agree with it—requires no particular accomplishment, only a perspective and an attempt to progress toward a certain goal. These things, I think, I can honestly claim.) Certainly you’ll find me to be among the first to argue for the philosophy laid out in the Sequences, and against any ‘post-rationalism’ or what have you.
But I have deep reservations about this whole ‘double-crux’ business, to say the least; and I have commented on this point, here on Less Wrong, and have not seen it established to my satisfaction that the technique is all that useful or interesting—and most assuredly have not seen any evidence that it ought to be taken as part of some “rationalist canon”, which you may reasonably expect any other ‘rationalist’ to endorse.
Now, you did say that you’d feel infuriated by having double-crux fail in either of those specific ways, so perhaps you would be ok with double-crux failing in any other way at all? But this does not seem likely to me; and, in any case, my own objection to the technique is similar to what you describe as the ‘rationalist-adjacent’ response (but different, of course, in that my objection is a principled one, rather than any mere unreflective lack of interest in examining beliefs too closely).
Lest you take this comment to be merely a stream of grumbling to no purpose, let me ask you this: is the bit about double-crux meant to be merely an example of a general tendency (of which many other examples may be found) for Less Wrong site/community members to fail to endorse the various foundational concepts and techniques of ‘LW-style’ rationality? Or, is the failure of double-crux indeed a central concern of yours, in writing this post? How important is that part of the post, in other words? Is the rest written in the service of that complaint specifically? Or is it separable?
There’s a big difference between a person who says double-cruxing is a bad tool and declines to use it, and a person who agrees to double-crux but then turns out not to actually be Doing The Thing.
And it’s not that ability-to-double-crux is synonymous with rationality, just that it’s the best proxy I could think of for what a typical frustrating interaction on this site is missing. Maybe I should specify that.
I would hazard a guess that you might have written the same comment with “debate” or “discussion” in place of double-crux, if double-crux hadn’t been invented. Double-crux is one particular way to resolve a disagreement, but I think the issues of “not willing to zoom in on beliefs” and “switching frames mid-conversation” come up in other conversational paradigms as well.
(I’m not sure whether Said would have related objections to “zooming in on beliefs” or “switching frames” being Things Worth Doing, but it seemed worth examining the distinction.)
I think it would be very connotatively wrong to use those. I really need to say “the kind of conversation where you can examine claims together, and both parties are playing fair and trying to raise their true objections and not moving the goalposts”, and “double-crux” points at a subset of that. It doesn’t literally have to be double-crux, but it would take a new definition in order to have a handle for that, and three definitions in one post is already kind of pushing it.
There are rationalist-adjacents for whom collaborative truth-seeking on many topics would fail because they’re not interested in zooming in so close on a belief. There are post-rationalists for whom collaborative truth-seeking would fail because they can just switch frames on the conversation any time they’re feeling stuck. And to try to collaborate on truth-seeking with someone, only to have it fail in either of those ways, is an infuriating feeling for those of us who thought we could take it for granted in the community.
Any better ideas?
“Collaborative truth-seeking”?
Gotcha. I don’t know of a good word for the super-set that includes double-crux, but I see what you’re pointing at.
Unless I am misunderstanding, wouldn’t orthonormal say that “switching frames” is actually a thing not to do (and that it’s something post-rationalists do, which is in conflict with rationalist approaches)?
I believe the claim he was making (which I was endorsing) was not to switch frames in the middle of a conversation in a slippery, goalpost-moving way (especially repeatedly, without stopping to clarify that you’re doing so). That can result in poor communication.
I’ve previously talked a lot about noticing frame differences, which includes noticing when it’s time to switch frames. Within the rationalist paradigm, though, I’d argue this is something you should do intentionally, when it’s appropriate to the situation; you should flag that you’re doing it, and make sure your interlocutor understands the new frame.
I agree with this comment.
The rationalist way to handle multiple frames is to treat them either as different useful heuristics, which can outperform naively optimizing from your known map, or as different hypotheses about the correct general frame, rather than as tactical gambits in a disagreement.
There’s a set of post-rationalist norms under which switching frames isn’t a conversational gambit; it’s expected, and part of a generative process for solving problems and creating closeness. I would love to see people be able to switch between these different types of norms, since it can be equally frustrating when you’re trying to vibe with people who can only operate through rationalist frames.
(just wanted to say I appreciated the way you put forth this comment – specifically flagging a disagreement while checking in on how central it was)