The word “better” is doing a lot of work (more successful? lower cost?), but in my personal experience and the experience of CFAR as a rationality org, double crux looks like the best all-around bet. (1) is a social move that sacrifices progress for happiness, and double crux is at least promising insofar as it lets us make that tradeoff go away. (2) sort of … is … what double crux is doing: moving the disagreement from something unresolvable to something where progress can be made. (3) is absolutely a good move if you’re prioritizing social smoothness or happiness or whatever, but a death knell for anyone with reasons to care about the actual truth (such as those working on thorny, high-impact problems). (4) is anathema for the same reason as (3). And (presumably like you), we’re holding (5) as a valuable-but-costly tool in the toolkit and resorting to it as rarely as possible.
I would bet $100 of my own money that nothing “rated as better than double crux for navigating disagreement by 30 randomly selected active LWers” comes along in the next five years, and CFAR as an org is betting on it with both our street cred and our actual allotment of time and resources (so, value in the high five figures in US dollars?).
I’d take your bet if it were for the general population, not LWers...
My issue with CFAR is that it seems more focused on teaching a subset of people (LWers or people nearby in mindspace) how to communicate with each other than on teaching them how to communicate with people they are different from.
That’s an entirely defensible impression, but it’s also actually false in practice (demonstrably so when you see us at workshops or larger events). Correcting the impression (which, again, you’re justified in having) is a separate issue, but I consider the core complaint to be long since solved.
I’m not sure what you mean, and I’m not sure that I’d let an LWer falsify my hypothesis. There are clear systemic biases LWers have which are relatively apparent to outsiders. Ultimately, I am not willing to pay CFAR to validate my claims, and there are biases that emerge among people involved with CFAR, whether as employees or as people who take the courses (sunk cost, among others).
I can imagine that you might have hesitated to list specifics to avoid controversy or mud-slinging, but I personally would appreciate concrete examples, as it’s basically my job to find the holes you’re talking about and try to start patching them.