My take on “why isn’t Double Crux getting more uptake”:
This ‘Double Crux’ thing seems like a complicated technique/process/something, with:
benefits that are nothing close to manifestly clear from the description
no clear, public examples of anyone using it (much less, successfully)
no endorsements from anyone whose opinion I respect (like Scott Alexander or Eliezer—or perhaps Eliezer did endorse it? but then I guess I wouldn’t ever know about it; such is the downside of using Facebook…)
There does not seem to be any reason why I should pay attention to it. That it’s not getting uptake seems to require little explanation; it’s the default outcome that I would expect.
(Also, it comes from CFAR, which is an anti-endorsement. This probably wouldn’t matter if all, or even any, of the above three things were different; but as is, for me, it’s the only thing influencing my inclination to really look deeply into the whole matter, and that influence is in the downward direction…)
[Note from the Sunshine Regiment] A lot has happened in this thread, I’m going to comment at second-to-top level so this gets as seen as possible while keeping its context.
In a nutshell
Yes, there is an obligation to be prosocial here.
There’s a lot of room for debate on what prosocial means and what trade-offs are worth it. This Guide To Comments is a start but insufficient. We welcome input from people as we figure this out.
I’m really torn on the particular comment “Also, it comes from CFAR, which is an anti-endorsement”. I want it to be as cheap as possible to criticize the in-group on Less Wrong, because so many other forces are making it expensive. So let’s be very clear that
sharing a negative opinion is not in and of itself anti-social.
But as several people have pointed out, this opinion was shared in a way that generated a lot of unnecessary friction. A simple “I think that...” or ”...for me” would have done a great deal to resolve this problem.
The mod team is in private contact with Said over this issue.
Re: “it comes from CFAR, which is an anti-endorsement.”
I find that a large majority of people who have a moderate-to-strong negative opinion of CFAR have either a) never subjected that opinion to falsification or b) not checked in since forming the opinion a long time ago.
Generally speaking, when I engage with such people, they come away much less hesitant or skeptical or critical, and I believe this is because of justified updates rather than because of e.g. me having a persuasive reality distortion field.
Most of the updates come in one of the following forms:
Ah, okay, CFAR’s made significant improvements along this axis that I was right to criticize it on.
Ah, okay, CFAR is aware that this attribute that it has isn’t ideal; I thought they were proceeding in ignorance but in fact they’re making a cost-benefit decision and while I might disagree with their weighting I am less concerned that they’re blind or stupid.
Ah, okay, this criticism I had was based on assumptions that are simply false, or on information that is simply inaccurate, and while CFAR maybe deserves some blame for imperfect image management and creating-or-allowing-others-to-create those impressions, the problem I thought existed literally doesn’t exist.
Said, if you would like to engage publicly with me regarding your own hesitations/criticisms/suspicions, I’m happy to make double crux motions unilaterally from my end as we do so, and then you’d have at least half of a public instance of double crux. (I won’t insist that you use the frame yourself until you’re at least convinced that it has potential.)
I do note that my mainline prediction for “this doesn’t work or doesn’t happen” is something like “Said claims that it’s not worth his time and attention to repair his impression of CFAR, given opportunity costs and prioritization and expected outcomes according to his models.” That seems fair and plausibly correct, but if that’s the case, I do request that in future criticisms you flag that your negative model of my org is resistant to falsification.
Said, if you would like to engage publicly with me regarding your own hesitations/criticisms/suspicions
I am not opposed to this, per se, but—
I do note that my mainline prediction for “this doesn’t work or doesn’t happen” is something like “Said claims that it’s not worth his time and attention to repair his impression of CFAR, given opportunity costs and prioritization and expected outcomes according to his models.” That seems fair and plausibly correct, but if that’s the case, I do request that in future criticisms you flag that your negative model of my org is resistant to falsification.
I’m afraid I have to object to this. The following aren’t equivalent:
It’s not worth my time and attention to engage with you, right now, in this context and fashion
vs.
It’s not worth my time to re-examine [“repair”? why “repair”? this seems to assume the outcome!] my impression of CFAR
Nor are these equivalent:
I am unwilling to engage with you about this [whether “here and now” or “anywhere, ever”]
vs.
I subject my views on this topic to no falsification of any kind [or do you hold that discussing the matter with you is the only possible way to gain accurate information or insight into your organization’s nature / activities / whatever?]
That said, I am willing to devote some effort to this.
Generally speaking, when I engage with such people, they come away much less hesitant or skeptical or critical, and I believe this is because of justified updates rather than because of e.g. me having a persuasive reality distortion field.
I believe you.
(I do not, however, think that this is as informative as it may seem, for various reasons which may perhaps come up in our discussion.)
Before we get deeper into this topic, may I ask—these interactions wherein you’ve convinced people to “come over to your side”, so to speak—have they taken place in person, or online? If the latter, are any public records of this available? (To be clear, I do not ask this because I doubt what you say about having persuaded people—I really do not.)
I appreciate pretty much everything about your reply up above.
Agreement that there was a false equivalency re: right now vs. ever.
Agreement that my phrasing presupposed an outcome (though that makes sense when you take the context of “the guy talking is the curriculum director at CFAR”). I predict that outcome, optimistically, but in fact the actual target should be and is “investigate” not “repair.”
Unfortunately for the goal of record-keeping and evidence-creation, most of those interactions have taken place in person. I could generate stories about what they’re like, but a better option seems to be “start taking notes now when they happen, and ask permission to make said notes public with reasonable anonymity.”
Thanks for responding 100% positively/exactly as I would hope a LWer would respond. I’d love it if you let me know if I myself am not living up to that standard, as you gently did above.
Re: the previous interactions: that no notes from them are available is not the problem, nor would having notes help in any meaningful way. (Plus—and I really hate to be so blunt about this, but—notes can say whatever the note-taker, or even the note-poster-to-a-public-website, wants them to say! I’m not seriously suggesting falsification of anecdotal evidence, and as I say below, this is not really my primary concern here, but from the appearance-of-propriety perspective, having notes is not a great situation.)
No, the reason I asked about whether the cited interactions took place in person is certainly not disbelief or lack of evidence; and it is only in lesser part the desire to examine the interactions and see what I can conclude from them. The real reason is that an interaction in person is tremendously different from an interaction via a web forum (like this one)!!
These differences are so profound and far-reaching—and so especially relevant for people with “our sort” of minds—that I hesitate to even begin enumerating them (though I’ll attempt to, upon request; but they should be obvious, I think!). The point, in any case, is that viewed in light of these differences, your track record of convincing nay-sayers, while undoubtedly real, should be much less persuasive, even to yourself, than you imply it to be.
It would be very different if you could point us to an online exchange, where you, and a serious and thoughtful interlocutor, took the time to compose comments and replies back and forth—the paradigmatic example of such, around here, being the Yudkowsky–Hanson “AI Foom” debate. (Ah, but how did that one turn out, eh?)
These differences are so profound and far-reaching—and so especially relevant for people with “our sort” of minds—that I hesitate to even begin enumerating them (though I’ll attempt to, upon request; but they should be obvious, I think!)
I request this enumeration, if your offer extends to interlopers and not just Duncan.
(The differences I can think of are instant vs asynchronous communication, nonverbal+verbal vs. verbal only, and speaking only to one another vs. having an audience. But I don’t see why these are *inevitably* so profound and far-reaching.)
Yeah—I actually think by far the biggest reason Double Crux hasn’t caught on is that no one has written a post optimized for getting it to catch on (Duncan’s post is instead optimized for making sure that the people who get it actually get the whole thing, and I think it requires you to trust that it’s worth the effort).
Up until last week, I actually thought Double Crux was a pretty straightforward concept (or at least, one that builds directly from ideas that are already common among educated people).
You could summarize Double Crux like this:
I. Ray Attempts to Explain Double Crux
Oftentimes, smart people end up talking past each other, or trying to score social points, or otherwise arguing in a way that doesn’t accomplish anything. This results in people wasting years arguing pointlessly, and moreover, at least half of those people spend years being wrong about stuff they could have talked through and figured out.
Double Crux is a technique to help short-circuit those pointless arguments, and instead figure out useful things together. Specifically, it is the first step of having a useful disagreement: figuring out what concrete thing you disagree about that you can potentially just go and check to see if it’s true.
The steps are:
1. Shift into a mindset where you’re in a collaborative truthseeking endeavor, rather than a debate where you’re trying to score points.
2. While in that mode, figure out what would actually, seriously make you change your belief (while your partner does the same for themselves). This is your Crux.
3. Together, try to find a concrete thing you both disagree on, that would change both your minds depending on whether it was true (i.e. if you could run an experiment and the world turned out one way, I’d change my mind, and if it turned out the other way, you’d change your mind). This is the Double Crux. (I actually think the phrase “Shared Crux” is a bit clearer.)
4. If the Double Crux isn’t something you can easily check in the real world, see if you can find a related feature of the world that’s at least evidence about whether the Crux is true.
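The core of steps 2 and 3 could be sketched, very loosely, as a tiny program. (All of the names here—`find_double_crux`, the example crux statements—are my own illustrative inventions, not anything from CFAR’s materials; this is just the set-intersection skeleton of the procedure, with none of the mindset work that steps 1 and 4 describe.)

```python
# Hypothetical sketch of the Double Crux steps above.
# A "crux" is a concrete, checkable claim whose truth value
# would actually change the participant's mind.

def find_double_crux(cruxes_a, cruxes_b):
    """Step 3: look for a claim that appears on BOTH participants'
    crux lists -- a shared claim such that settling it would move
    each person's belief."""
    shared = set(cruxes_a) & set(cruxes_b)
    return sorted(shared)  # deterministic order for display

# Step 2: each participant independently lists claims that would
# seriously change their own belief about the disagreement.
alice_cruxes = {
    "spanking correlates with worse adult outcomes",
    "children cannot understand verbal correction before age 4",
}
bob_cruxes = {
    "spanking correlates with worse adult outcomes",
    "most parents spank out of anger rather than strategy",
}

double_cruxes = find_double_crux(alice_cruxes, bob_cruxes)
print(double_cruxes)
# The remaining work (step 4) is to go check the shared claim
# empirically, or find evidence that bears on it.
```

The point of the sketch is only that the hard part is generating honest crux lists; the “find the overlap” step itself is trivial.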
This isn’t really especially original. “Make sure you’re not talking past each other, figure out what you’re actually disagreeing about, figure out a way to test it empirically” is something people have been doing since way before CFAR.
In my opinion, the value-add is mostly giving it a name, operationalizing it, and specifically claiming that people should be doing this all the time, whenever a major disagreement happens that’s important to resolve, instead of arguing in circles.
II. But, maybe this is harder?
Last week, a couple people argued with me that this is in fact fairly hard, you can’t really learn how to do it except by watching skilled people do it, and reading a couple-paragraph description of it isn’t nearly enough. It’s more like an artform than an easily learned technique. I’m unsure about that (I have vague plans with those people to talk it over in more detail later).
Right now I’m writing this mostly to provide better background for people who haven’t been following all the discussions of Double Crux lately (most of which have, yes, been on Facebook; this post is my attempt to change that).
Right now I’m writing this mostly to provide better background for people who haven’t been following all the discussions of Double Crux lately (most of which have, yes, been on Facebook; this post is my attempt to change that).
I certainly appreciate that.
Let me offer a couple of suggestions, that would, at least, help you explain it to me (and perhaps to others? but that’s as may be):
1. Extensions, not intensions.
I’d really like an actual, live (by which, of course, I mean “online”) example of people using this Double Crux business. Like, actually for-real (and not, say, as a demonstration example, arranged for the purpose of showing off the technique).
Is it even doable online? In a forum / blog context? Perhaps at least in chat? Or is it only something that can be done in person? (If so, that makes it of limited use, at least, to the LW audience—useful though it may be to your local, meatspace, community of rationalists!)
2. Applicability.
Someone recently said to me, of Double Crux (I am quoting from memory): “it seems like a decent attempt to solve a problem that almost never happens”. He meant, I think, something like—most of the time, when people (even rationalists) disagree or argue or otherwise fail to see entirely eye to eye on a matter, it is not in a way that would be solved by identifying some key fact about which they differ.
How would you characterize the class of situations in which Double Crux is applicable? How often do you think such situations come up (in comparison to, say, the category of “all disagreements that occur between people”, or even “all disagreements that occur between rationalists”)? Could you, again, point to several (at least three) real, live examples—publicly perusable by your readers here—of disagreements which Double Crux would cut through?
This second point seems to me to be of the highest importance, especially because you say:
In my opinion, the value-add is mostly giving it a name, operationalizing it, and specifically claiming that people should be doing this all the time, whenever a major disagreement happens that’s important to resolve, instead of arguing in circles.
But in fact, Double Crux’s applicability is very limited in scope; or else I really understand nothing about it. So—explain! :)
I’d be willing to do an asynchronous attempt to double crux about whether the problems that motivated the creation of double crux ever happen. We could then post the results as a public example. My understanding is that the person who said that to you misunderstands the problems it’s trying to solve, because they definitely happen all the time in my experience.
Well, I’m not willing to take (and have never taken) the position that such problems never happen. As for your offer, it is appreciated, but I was hoping first to look at an existing example (or three), before trying it myself; else I would surely do it wrong, and the attempt would prove nothing…
But maybe, as a sort of prelude, we could start with you giving some examples of real-life situations that would be solved by the Double Crux?
Yeah. (Also thanks for being willing to spend time on this—when I imagine myself thinking a thing is Useless, then I imagine it feeling costly to give it extra chances to prove itself.)
The counting up vs counting down post that I wrote yesterday to near zero acclaim is one of them—often people are sort of talking past each other and both people seem to be fighting for good and coherent goals, and double crux motions (why do you believe what you believe, what would cause me to change my own mind) helps uncover those faster than default motions. “Ohhhhh, wait, hang on—I think I would agree with what you’re saying if I thought that we couldn’t expect to do this perfectly, and should be happy with any results above zero, and happy proportional to how far above zero we get.”
Another is the issue of burden of proof, which I think I’ve read cited in double crux explanations specifically somewhere, maybe on Facebook. The thing I’m remembering is something like, if both sides disagree about where the burden of proof lies, then both sides will end up “declaring victory” prematurely and saying that the other side has failed to justify itself. So if Bob thinks corporal punishment is how it’s always been done, and it’s on the bleeding hearts to prove that one should never spank kids, and Joe thinks nonviolence and sovereignty are the obvious priors, and it’s on the backwards troglodytes to prove that spanking is net beneficial, the debate won’t ever really move forwards productively. Double Crux solves this in theory because each person, if constantly scanning their own belief structure and asking what would cause them to change their own mind, will notice what burden of proof they’re already expecting of their own beliefs, and can make that known to the other person.
Some other situations, off the top of my head:
You and I are in a car in traffic, and I honk the horn at someone and wave a middle finger at them, and you’re really uncomfortable and criticize my road rage, and we’re trying to converge on whether it was actually right that I did what I did. Double Crux seems like a good tool for each of us to get to the bottom of our implicit models and make them available to the other person.
You and I are living together in a house, and we have some sort of agreement about the cleanliness of the common spaces, and we keep clashing over it such that I feel judged and you feel defected on, and to some extent (given that each of us has our own frame) we’re both right. Double Crux (or at least the generators that caused Double Crux to be invented) seems like a useful tool for helping us keep the argument on track à la “under what circumstances would you agree my mess was permissible/under what circumstances would I agree I’d been too cavalier” (such that we can feel confident things will be different in the future because our models now converge), versus having it spiral off into “you’re a dick/you’re a slob,” which isn’t crucial to our disagreement in the same way.
You and I are trying to decide how to divide a chunk of value (e.g. $10000 we were given in a grant, or our work hours over the next month) and we strongly disagree to the point that there’s sort of a zero-sum game (e.g. I need all of my hours and some of yours to accomplish my plan, and the same is true in reverse for your plan). We could resolve this through rank, or we could resolve it in a social pressure game, or we could just fight and sink everything, but through Double Crux or something like it it seems likely that we can come closer to understanding why the other person is so confident that their use of resources is better, and once we both have identical overlapping models of both sides it seems likely that we can act strategically in a coordinated fashion to choose the best tradeoff.
Hello, I’m the person who said Double Crux seems like an attempt to solve a problem that almost never happens. More specifically, the disagreements I see happening between reasonable people are almost always either too easy or too hard for Double Crux to be useful.
On questions like “what is the longitude of Tokyo” or “who starred in the original Star Wars,” two people could agree that looking up the answer on Wikipedia would convince both of them, which would technically fulfill the formal rules of Double Crux, but that hardly seems like a special “rationality technique” or something CFAR can take credit for inventing.
On the other hand, on a question that hinges on value differences like your examples, I can see one of three things happening: either the disputants compromise their honesty by agreeing on a crux which appears relevant but isn’t actually connected to the real motivations behind their disagreement (“if spanking is statistically correlated with a decrease in lifetime earnings, p<0.05, then it is bad, otherwise it is good”), or they maintain their honesty but commit themselves to solving longstanding open problems in metaethics and/or changing genetically mediated personality differences through verbal argument, or they end up using other negotiation techniques and falsely calling it Double Crux.
Double Crux does seem applicable to questions where the answer can’t simply be looked up, where the disagreement is strictly confined to the empirical level and doesn’t touch on value differences or epistemological questions in any way, yet also where the evidence is ambiguous enough to allow for reasonable disagreement. But those are rare in my experience.
I note there’s something in here that I’m reading as a pseudofallacy—it’s the same reason why MythBusters is terrible, and it goes like “I can only think of these three outcomes, and therefore those are the most likely outcomes.”
This thread and the original Double Crux thread on LessWrong (plus the ~1000 or so CFAR alumni) are full of people saying that Double Crux does indeed work to solve discourse problems that crop up a lot.
That absolutely does not erase your personal experience of a) not seeing those problems and b) not seeing Double Crux solve them. Your personal experience is valid and real and definitely counts as data.
But there’s a particular sort of … audacity? … in taking one’s own, personal experience, and using it to trump the experiences of others, and concluding with fairly strong confidence “this thing that a lot of smart people say is useful just isn’t.”
In your shoes, I’d say something like what I said in my Focusing post, which is “this thing that is useful for a lot of people isn’t useful for me or the people around me.” That seems more solidly justified and epistemically sound, and enriches an onlooker’s understanding of the situation rather than creating crosswise narratives.
In particular, as I tried to do with Focusing, I’d make a genuine attempt to learn Double Crux (from the people who know what they’re talking about and can point out your mistakes and scaffold your understanding) before writing it off. I weakly predict that you haven’t done A + B + C where A is attend a CFAR workshop or one of their Double Crux instruction sessions at e.g. EA Global, B is talk directly to somebody who’s skilled in Double Crux and ask them to help you overcome the standard failure modes, and C is go out and really actually try to follow the real actual steps for five very different sorts of disagreements with real actual humans.
(By the way, it’s completely fine to have not done A + B + C. People have higher priorities. But I personally think that in a rationalist community like Less Wrong, we have a responsibility to not claim things are false or useless or stupid until we’ve actually attempted to falsify them, not just scanned through our own experiences for confirming evidence. If I were in your shoes and I didn’t think Double Crux was useful and I also didn’t intend to do A + B + C, I’d caveat my suspicions of its relative uselessness heavily by pointing out that I was using Stereotypes rather than Rigor, and I want people on Less Wrong to call for and socially reinforce that sort of standard.)
Will probably add that to my list of posts to write this month.
Also, am willing to do the thing that’s been suggested over and over in this thread, and do a Double Crux with you on the usefulness/uselessness of Double Crux, including doing the motions unilaterally while you do whatever you feel like. I could use more practice with Double Cruxing in a not-fully-cooperative environment, since it seems like a plurality of the important debates happen with people who aren’t willing to enter the Double Crux frame anyway.
You accuse me of using Stereotypes rather than Rigor, but I in turn accuse you of using Social Proof rather than Rigor, which I consider far more dangerous, because it leads to self-reinforcing information cascades. By reflexively characterizing all skepticism as hostile, you further reinforce this dynamic by creating a with-us-or-against-us atmosphere.
Yes, I don’t actually believe that ~1000 or so CFAR alumni self-reports represent enough evidence to overturn my initial opinion. There are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy, but I wonder if you would as forcefully reject a similar Stereotype-based dismissal of that. I’d be very happy to see some real rigor, but I’m not aware of any such from CFAR that I would actually trust to bring back a negative result if the same procedure were used on homeopathy enthusiasts. (And by the way, in 2014 Anna Salamon said CFAR was “supposed to be doing better science later,” meaning better than self-reports and personal impressions. How much later is later?)
I never gave any indication that my comment represented anything but my own personal impression, or that it somehow trumps the experiences of others. But I’m going to keep pointing out that I see the emperor wearing fewer clothes than he claims for as long as I continue to see it that way, and I consider this to be an explicitly prosocial act. I don’t gain anything personally by this, and these contentious posts are actually fairly stressful for me to write, but I consider it worth it to try to push back against your open advocacy of credulousness and protect a rationalist community like Less Wrong from evaporative cooling.
I have not in fact attended a CFAR workshop and don’t intend to, for reasons that might get me in trouble with the “Sunshine Regiment” if I were to explain, but I have read the posts explaining Double Crux and have even found it useful once or twice. I’m happy to try it with you if you’d like.
I disagree with your claim that I “reflexively characterized all skepticism as hostile.” I have reread my own comment and I do not think that’s a fair or accurate synopsis.
I believe you are overstating your claim that “there are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy” and disagree with the attempt to draw an equivalency there (I both do not think the situations are analogous and don’t think you could actually find thousands of people in the intersection of “smart” and “endorses homeopathy”).
My main point is that it looks to me like you are skeptical of everything but your own impressions, and that Less Wrong should be the sort of place where people actually take heuristics and biases literature seriously, and take the Sequences seriously, and are aware of how fallible their own thinking and impression-making mechanisms are, and how likely it is that they’re being influenced by metacognitive blindspots, and take deliberate and visible steps to compensate for all of that by practicing calibration, using reference class forecasting, taking the outside view, making concrete predictions, seeking falsification rather than confirmation, etc. etc. etc.
In short, I wasn’t asking you to be less skeptical, I was asking you to add one more person to your list of people you’re skeptical of—yourself.
I’m attempting to point out that your claim “Double Crux seems like an attempt to solve a problem that almost never happens” seems to have been outright falsified—even if your homeopathy analogy holds, homeopaths aren’t necessarily hypochondriacs, and I would trust the reports of homeopaths who are saying “I am experiencing this-or-that physiological distress which requires some form of treatment” or “I am having this-or-that medical problem which is lowering my quality of life” without reference to their thoughts on what would fix it. It does not seem that you are updating away from “the problems that Double Crux purports to solve are rare” and toward “those problems are rare in my experience but reliably common for large numbers of people.”
I’m attempting to point out that your statement “I can see one of three things happening” was made in such a way as to imply that there are no other likely things that might happen, and that you’re considering your ability to generate hypotheses or scenarios or predictions to be likely sufficient and near-complete. It’s like when the MythBusters say “Well, we failed to recreate claim X, and therefore claim X is impossible!” That whole paragraph was setting up strawmen and false dichotomies and ignoring giant swaths of possibility.
I didn’t feel like you really addressed any of the thrust of my previous reply, which was something like “If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?”
It does not look, based on your comments thus far, like you’re sincerely asking that question. Again, that’s fine—it could simply be that it’s not worth your time. Or it could be that you’re asking that question and I just haven’t noticed yet, and that’s fine because it’s in no way your job to appease some rando on the internet, and my endorsement is not your goal.
But the issue I have, at least, has nothing to do with your opinion on Double Crux. It has to do with the public impression you’re leaving, of how you’re forming and informing it. You’re laying claim to explicitly prosocial behavior on the basis of continuing skepticism, and I simply don’t believe you’re living up to the ideals you think you are. I think Less Wrong has (or ought have) a higher standard than the one you’re visibly meeting. The difference between solving the Emperor’s Clothes problem and just being a contrarian is evidence and sound argument.
Is this ad hominem? Reasonable people could say that clone of saturn values ~1000 self-reports way too little. However it is not reasonable to claim that he is not at all skeptical of himself, and not aware of his biases and blind spots, and is just a contrarian.
“If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?”
Personally, I would go to a post about Double Crux, and ask for examples of it actually working (as Said Achmiz did). Alternatively, I would list the specific concerns I have about Double Crux, and hope for constructive counterarguments (as clone of saturn did). Seeing that neither of these approaches generated any evidence, I would deduce that my impressions were right.
I suspect I’m already being more confrontational than you’d prefer, and I don’t want to further wear out my welcome, or take the risk of causing unnecessary friction, by bringing up any other potentially negative points not directly related to CFAR’s rationality content or Double Crux. Should I take it that I was being unnecessarily cautious?
Hmm… I appreciate the effort that went into your reply, but I think I may’ve been unclear about what I asked: I was hoping to see actual examples—not hypothetical examples, nor categories (into which some unspecified examples are alleged to fall)!
That said, your hypothetical examples are relatively informative, so, thank you! They do much to increase the certainty of my previously-somewhat-tentative view that Double Crux is not a terribly useful technique in most circumstances (such as most of the ones you listed).
This, clearly, is the opposite reaction to the one you were (presumably) hoping for; perhaps I still have some fundamental misunderstanding. Real-life examples would, I think, really be quite helpful here.
Hmmm. Maybe there’s something in here about the difference between “Double-Crux-like” and “formal Double Crux”? On reflection, after you said you’re more certain Double Crux is low-utility, I was maybe imagining that this was because you saw the formal Double Crux framework as brittle or overly constraining, whereas you might agree that somebody adhering to the “spirit” of Double Crux (which could also be fairly labeled the spirit of inquiry or the spirit of cooperative disagreement or the spirit of impartial investigation and truth-seeking, because it’s the thing that generated Double Crux and not something that’s owned by the named technique) would be more likely to make progress than someone not adhering to said spirit.
General request to all commenters: when editing a post to change wording or content, please retain the original wording / content, if existing replies to your comment reference it or depend on it in any way. Doing otherwise destroys the coherence of comment threads, and makes them less useful to later readers.
Re: the edited comment: it baffles me that you perceive that sentence as not only rude, but so rude that it could only be intentional—given that I chose my words carefully, to avoid explicit abuse or impoliteness! How could I have phrased my comment instead, in your opinion, that would’ve upgraded it at least to the level of “unintentionally rude” (“actually polite” is probably too much to hope for), without losing the meaning?
I am dismayed by the discourse norms that such comments imply. :(
I am surprised at 2, and want to retract my comment and make this whole subthread not able to hurt me any more. I’m feeling a lot of social disapproval at my having posted the comment, and my update from it is to just not make comments like that, which I think is a good outcome for your preferences about discourse norms. I can’t stand social disapproval like this, and I feel an urgent need to do whatever will make it go away the fastest—on most sites, that’s “delete my comment, never post another one like it”.
Though actually, I have 4 points now. But I still acutely feel your disapproval of my having expressed disapproval at you, and want to just take it back and let you talk how you want.
(meta: it’s quite scary for me to try to be honest about this. I feel urge to reply with my actual feelings in the interest of truth seeking, but normally would just be silent.)
Upvoted. I regret that my comments had this effect on you (though do not regret making them). I hope that you will continue to comment no less earnestly than you’ve done so far, and encourage you to do so.
which I think is a good outcome for your preferences about discourse norms
My discourse norms are honesty, integrity, and truth.
I regret that my comments had this effect on you (though do not regret making them).
I like this. My approval drives would lead to a chilling effect on truth-seeking if everyone tried to white-box optimize them when having conversations, and I don’t endorse that; I’d rather people hurt me a bit than fail at truth-seeking. I wish I had a better way to defend myself from the hurt of social disapproval, though; eg, disowning a comment.
My discourse norms are honesty, integrity, and truth.
Strongly agree on #1 (with obvious exceptions if the original wording reveals trade secrets, libels people likely to bring legal action, etc; but in those cases you should still describe what used to be there even if you can’t preserve it).
On #2, I can’t share SA’s bafflement. What isn’t rude about saying that a particular organization is so useless that when, attempting to do its job, it recommends doing a thing, that’s evidence against the value of doing it?
I guess it’s not rude if you know there’s no one around who belongs to, or identifies strongly with, that institution. But that’s not very likely in these parts. Otherwise: what baffles me is how anyone would expect that not to be rude.
(To be clear: “Rude” is not the same thing as “bad” or “wrong”. Sometimes being rude is a good thing. Sometimes it is a necessary evil. I am not claiming that no one should ever be rude.)
You seem to be using “rude” in such a way that the property of rudeness can attach to claims on the basis of their propositional content only. That, to me, is a very strange usage.
It seems to me that either you must think that there’s nothing necessarily wrong with being rude; or, you must think that certain claims simply cannot be made, certain propositions simply cannot be expressed—regardless of their truth value (if they are not trade secrets or so on).
I disagree with the latter, and prefer a word usage that makes the former false (else the word “rude” becomes largely useless).
It’s too late to accomplish this by this point, but the response I had planned for your CFAR comment (I actually had it planned before lahwran responded), which I didn’t have time to write before going to bed, was something like:
”I had an initial negative reaction and urge to downvote when I saw the CFAR comment, but I quickly noticed that most of that was coming from a place of tribal emotions (i.e. ‘must defend my people!’) which I didn’t endorse. I briefly considered trying to respond in a more careful way that got to the heart of the issue, but it seemed like the “yay CFAR? / boo CFAR?” was basically a distraction. There may be a time/place for it but this isn’t it.
I’d prefer if people didn’t end up having a giant discussion about “is CFAR good/bad?” and instead stuck to discussion of Double Crux as a technique.”
Having said that, in light of your other comment about wanting to see a public Double Crux, “should CFAR be positive or negative evidence of a technique’s validity” is precisely the sort of question that Double Crux is for, and I’d be interested in doing a public DC on it with you if you’re up for it (normally I’d suggest skype but since part of the point is to produce something easy for others to consume, chatlog could be fine)
(that said, I’m fairly busy in the next 30 hours or so. I might be up for it Friday night or over the weekend though)
(Edit: it looks like some other people also offered something like this, I don’t think it’s especially important I be involved, but think it’d probably be valuable in any case)
I agree with you re: the grandparent, and I appreciate the offer re: the Double Crux.
I am, sadly, unlikely to be able to take you up on it; my “commenting on or about an internet forum” time budget is already taken up by this flurry of activity here on LW 2.0.
Instead, I’d just like to reiterate my request / suggestion that you folks find some way to be able to point readers to pre-existing, publicly viewable examples of the technique being used. I think much hinges on that, at this point. Offering, when questioned, to demonstrate Double Crux, by way of trying to debate whether Double Crux is any good, is all very well, but—it simply doesn’t scale!
Doesn’t scale, but seems like it should happen at least once. (tongue sort of but not entirely in cheek). Then you can just link to it the second time.
The problem is that Double Crux is best conducted in ways that aren’t very amenable to publicizing (i.e. a private walk where people feel free-er), so there needs to be some attempts to do a public one at a time when:
- it’s high enough stakes that it matters, so you can see people using the technique for real
- it’s low enough stakes that it’s okay to publicly share it without you having to worry about “looking good” during the discussion
- it’s convenient to record in some way
I agree, which is why I think noticing that there’s an opportunity to do a public one (i.e. now) is something that should be treated as a valuable opportunity that’s worth treating differently than arguing-on-the-internet-qua-arguing-on-the-internet.
(I also think arguing “should ‘created by CFAR’ be positive or negative evidence” is at least slightly less meta-sturbatory than “let’s double crux about double crux”)
Strong agree that it’s both true that “the lack of an example to point to produces justified skepticism” and that “that’s partly unfair because that skepticism and other ‘too busys’ keep feeding into no one taking the time to create said example.”
Yes, I think things can be rude on the basis of their propositional content. (But not only their propositional content.) If I state that you are very unintelligent, and I say it in the presence of you or of your friends, then I am being rude. I can do it in extra-rude ways (“Said is a total fucking moron”) or in less-rude ways (“I have reason to think that Said’s IQ is probably below 90”) but however you slice it it’ll be rude.
(For the avoidance of doubt, of course I do not in fact think any such thing.)
I do, indeed, think there is nothing necessarily wrong with being rude. As I said: Sometimes being rude is a good thing, and sometimes it’s a necessary evil. All else being equal, being rude is usually worse than not being rude, but many other things may outweigh the rudeness.
I don’t see that this makes the word “rude” largely useless, and I’m not sure why it should. If you mean it makes it meaningless then I strongly disagree (I take it to mean something like “predictably likely to make people upset”, though for various reasons that isn’t exactly right). If you mean it makes it unactionable then again I disagree; it just means that acting on the knowledge that something is rude is more complicated than just Not Doing It. (If you want to upset someone, which there may be good reasons for though usually there aren’t, then rudeness is beneficial. If you don’t but other things are higher-priority for you than not upsetting people, then you weigh up the benefits and harms, as always.) If you mean something other than those and the above hasn’t convinced you that my way of using “rude” isn’t useless, then you might want to explain further.
Indeed I meant “meaningless”, or perhaps “encompassing many disparate meanings under the umbrella of one word; attempting to refer to unrelated concepts as if they are the same or closely clustered; failing to cleave reality at the joints”.
I find it quite unnatural to apply the word “rude” as you do, and, to be extra clear, will certainly never mean anything like this when I use the word.
My takeaway here is that if you tell me that something is “rude”, I have not really gained any information about what you think of the thing, nor will I take you to have made any kind of definite claim about the thing, nor even do I know whether you’re attempting to ascribe positive valence to it or negative. (This is, to my mind, an unfortunate consequence of using words in strange ways, though of course you are free to use words as you please.)
I suppose I will have to remember, should you ever describe my comments as “rude” henceforth, to reply with something like—“Ok, now, what actually do you mean by this? ‘Rude’, yes, which means what…?”.
I am confused. (And also, apparently, confusing, which I regret.)
If I say something is rude then you learn that in my opinion it is likely to upset or offend a nontrivial fraction of people who read it. (Context will usually indicate roughly which people I think are likely to be upset or offended.)
How is that no information? How have I made no definite claim?
(It is true that merely from the fact that I call something rude you cannot with certainty tell whether I am being positive about it or negative. The same is true if I call something large, ingenious, conservative, wooden, complex, etc., etc., etc. I don’t see how this is a problem. For the avoidance of doubt, though, most of the time when I call something rude I am being negative about it, even if I think that the rudeness was a necessary evil.)
My use of the word “rude” doesn’t seem to me particularly nonstandard or strange. It’s more or less the same as definition 5a in the OED, which is “Unmannerly, uncivil, impolite; offensively or deliberately discourteous”. (The OED has lots of definitions, because “rude” does in fact have lots of meanings. It can e.g. sometimes mean “unrefined” or “vigorous”.)
Clearly you are dissatisfied with my usage of the word “rude”. Perhaps you might tell me yours; it is still not clear to me either what it is or why it might be better than mine. From what you say above, it seems that you want it used in such a way that “X is rude” strictly implies “X is morally wrong”, but if that’s really so then I’m unable to think of any meaning that does this while coming anywhere near the specificity that “rude” usually has. (At least for those who have moral systems not entirely based around not giving offence, which I am pretty sure includes both of us.)
I admit that I’m puzzled by your comment. What is it that you think I might be hiding, or that I might wish to (plausibly) deny…? I thought I’d made myself reasonably clear, but if some part of my comment’s meaning seems obscure to you, I’d be glad to clarify…
(As a side note, and more generally, I’d like to note my very strong distaste for any community / site discourse norms that required commenters to hold to “prosocial wording” at all times. There is a difference between respectfulness and common decency, on the one hand, and on the other, this sort of stifling tone policing.)
I agree: it doesn’t read at all like an attack hidden behind plausible deniability, it reads like an attack that isn’t hidden at all.
But what’s it for?
Unless you think there are a lot of LW-adjacent people who regard “X comes from CFAR” as evidence against X being useful (my guess is that there are not, though there are probably a fair few who think “X comes from CFAR” is no evidence to speak of that it actually is useful), it’s not doing anything to resolve Raemon’s curiosity about why the technique hasn’t become popular. (I think the rest of what you wrote, however, does an admirable job of that, and I agree that it seems like a sufficient explanation.)
And, if in fact doublecruxing’s CFAR origins are a problem for any reason, it’s not like there’s much anyone can actually do about them.
The immediate impression I get from your remark about CFAR is this: “Said Achmiz really doesn’t like CFAR, and he wants everyone to know it, so much so that he puts anti-CFAR jabs into comments where they add nothing and probably serve only to antagonize people who might otherwise listen more willingly to what he’s saying”. It’s the same feeling I get from the similar jabs some people like to make at one another’s political or (ir)religious positions. I think they (and I am very much including yours here) tend to push discussions in the direction of tribal warfare (are you on Team CFAR-is-Good or Team CFAR-is-Bad?) and make them less productive.
There absolutely should not be any sort of obligation to be “prosocial” here. And if you wrote a post about why you think CFAR does more harm than good, I would read it with interest and probably upvote it. (My main reservation would be that communities like this tend to spend too much time discussing themselves and not enough time discussing actual issues, and this might be heading in the same direction.) But, while I’m not sure I can endorse the specific complaint lahwran made, I very much do endorse a slightly different one: your comment about CFAR was gratuitously rude and largely irrelevant, and what you wrote would have been better without it.
I am concerned about a fairly mild anti-CFAR comment getting this much criticism. I do think “part of the reason I haven’t adopted double crux is that I don’t trust CFAR” is a relevant comment. Even if it wasn’t, I worry that motivated reasoning will cause people to be far more upset about criticism of respected rationalist organizations than they are of other institutions, and for this to lead to a dynamic where people are quiet about their feelings about CFAR for fear of being dogpiled. This seems harmful both as a community norm and to CFAR itself.
To be clear, I am not complaining about SA’s comment because it’s anti-CFAR. I’m pretty skeptical about CFAR myself; I wouldn’t go as far as SA does, but the fact that CFAR recommends something doesn’t seem to me very good evidence for it.
I’m complaining about SA’s comment because it seems to me irrelevant, un-called-for, and likely to annoy or upset some readers (of whom I am not one) with no offsetting benefit to make it worth while.
But I very much hope that no one feels unable to criticize CFAR or MIRI or any other entity for fear of being dogpiled, and (as one alleged dog in the alleged pile) promise that if I see such dogpiling happening to someone for relevant criticism then I will be right there on the barricades defending them.
Here’s a more general comment re: the relevance of my aside—not about this issue in particular, but this general class of things.
I have, quite a few times in the past, had the experience of bringing up something like this, and having the responses of other participants or potential-participants in the discussion be split along lines as follows:
Some people: That was unnecessary! And irrelevant. No one else feels this way, why bring your grudge into this unrelated matter?
Other people: Thank you for saying that. I, too, feel this way, and agree that this is highly relevant, but didn’t want to say anything.
Those in the first category are usually oblivious to the existence and the prevalence of those in the second.
So yes, I think that it is not only absolutely permissible, but indeed mandatory, to insert just such asides into just such discussions. If there’s no uptake—well, then I simply drop the matter. Saying it once, or at least once in a long while, is sufficient; I have no problem changing the subject. But pervasive silence in such cases is how echo chambers form.
I can very well believe that remarks like this get exactly those sorts of comments, but I don’t think the existence of the Other People is good evidence that the remarks are a good idea. All it need show is that there are people who are cross about X (in this case X=CFAR) and feel that their views are underrepresented, which is not sufficient to make anti-X jabs helpful contributions to any given discussion.
If your opinion is that CFAR is a fraud or a scam or just inept and want to reassure others who hold similar views, then make a post actually about that explaining why you think that. It’ll be far more effective in showing those people that they have allies, it’ll provide a venue for others who agree to explain why (and for those who disagree to explain why, which should also be important if we’re trying to arrive at the truth), and it’ll have some chance of persuading others (which at-most-marginally-relevant jabs will not).
If going to the effort of writing a whole post about a concern is a prerequisite to ever mentioning the concern at all, then I think that’s an entirely unreasonable barrier, and certain to create a chilling effect on discussions of that concern. I oppose such a policy unreservedly.
All it need show is that there are people who are cross about X (in this case X=CFAR) and feel that their views are underrepresented, which is not sufficient to make anti-X jabs helpful contributions to any given discussion.
I thought that “and the concern in question is relevant to the current discussion” was implied. But consider it now stated outright! Append that, mentally, to what I said in the grandparent. (Certainly, as I made clear in the parallel thread, I think that the CFAR issue is relevant to this discussion.)
Perhaps I wasn’t clear: I don’t think you are, or should be, forbidden to mention your opinions of / attitude to CFAR if you aren’t willing to make a whole post explaining them. That would be crazy.
What I do think (which seems to me much less crazy) is this: 1. If, as you say three comments upthread from here, you feel that you have an obligation to say bad things about CFAR in public so that LW2 doesn’t become a pro-CFAR echo chamber, then what you’ve done here is not a very effective way of doing it, and writing something more substantial would be much more effective. And: 2. Dropping boo-to-CFAR asides into discussions of something else is likely to do more harm than good (even conditional on CFAR being bad in whatever ways you consider it bad; in fact, probably more so if it is) because its most likely effect is to make fans of CFAR defensive, people who dislike CFAR gloaty, and people who frankly don’t care much about CFAR annoyed at having what seem like political rivalries injected into otherwise-interesting discussions.
Of course, what’s ended up happening is that there’s been a ton of discussion and you may end up expending as much effort as if you’d written a whole post about why you are unimpressed by CFAR, but without the actual benefits of having done so. For the avoidance of doubt, that wasn’t my intention, and I doubt it was anyone else’s; but it’s not exactly a surprising outcome either—gratuitously inflammatory asides tend to have such consequences...
Very enthusiastic +1 to this. I also don’t want to have a policy (that, empirically, I currently have, I guess?) of making people who say things like what you said, end up having to defend their views for hours in replies.
Unless you think there are a lot of LW-adjacent people who regard “X comes from CFAR” as evidence against X being useful
I do think that, in fact. (Caveat: I don’t know about “a lot”; I couldn’t speak to percentages of the user base or anything. Certainly not just me, though.)
If you took my comment as merely a political jab, feel free to ignore it. I am not certainly not interested in discussing CFAR-in-general in this thread (though would be happy to discuss it elsewhere). But that part of my comment was fully intended to be as substantive and on-point as the rest of it.
There absolutely should not be any sort of obligation to be “prosocial” here.
I think that it might be productive for the moderation team to comment on this point in particular. It seems like this might be a genuine difference in expectations between segments of the user base, and between the moderators and some of said segments.
(I think the rest of what you wrote, however, does an admirable job of that, and I agree that it seems like a sufficient explanation.)
Re: “it comes from CFAR, which is an anti-endorsement.”
I find that a large majority of people who have a moderate-to-strong negative opinion of CFAR have either a) never subjected that opinion to falsification or b) not checked in since forming the opinion a long time ago.
Generally speaking, when I engage with such people, they come away much less hesitant or skeptical or critical, and I believe this is because of justified updates rather than because of e.g. me having a persuasive reality distortion field.
Most of the updates come in one of the following forms:
Ah, okay, CFAR’s made significant improvements along this axis that I was right to criticize it on.
Ah, okay, CFAR is aware that this attribute that it has isn’t ideal; I thought they were proceeding in ignorance but in fact they’re making a cost-benefit decision and while I might disagree with their weighting I am less concerned that they’re blind or stupid.
Ah, okay, this criticism I had was based on assumptions that are simply false, or on information that is simply inaccurate, and while CFAR maybe deserves some blame for imperfect image management and creating-or-allowing-others-to-create those impressions, the problem I thought existed literally doesn’t exist.
Said, if you would like to engage publicly with me regarding your own hesitations/criticisms/suspicions, I’m happy to make double crux motions unilaterally from my end as we do so, and then you’d have at least half of a public instance of double crux. (I won’t insist that you use the frame yourself until you’re at least convinced that it has potential.)
I do note that my mainline prediction for “this doesn’t work or doesn’t happen” is something like “Said claims that it’s not worth his time and attention to repair his impression of CFAR, given opportunity costs and prioritization and expected outcomes according to his models.” That seems fair and plausibly correct, but if that’s the case, I do request that in future criticisms you flag that your negative model of my org is resistant to falsification.
I am not opposed to this, per se, but—
I’m afraid I have to object to this. The following aren’t equivalent:
It’s not worth my time and attention to engage with you, right now, in this context and fashion
vs.
It’s not worth my time to re-examine [“repair”? why “repair”? this seems to assume the outcome!] my impression of CFAR
Nor are these equivalent:
I am unwilling to engage with you about this [whether “here and now” or “anywhere, ever”]
vs.
I subject my views on this topic to no falsification of any kind [or do you hold that discussing the matter with you is the only possible way to gain accurate information or insight into your organization’s nature / activities / whatever?]
That said, I am willing to devote some effort to this.
I believe you.
(I do not, however, think that this is as informative as it may seem, for various reasons which may perhaps come up in our discussion.)
Before we get deeper into this topic, may I ask—these interactions wherein you’ve convinced people to “come over to your side”, so to speak—have they taken place in person, or online? If the latter, are any public records of this available? (To be clear, I do not ask this because I doubt what you say about having persuaded people—I really do not.)
I appreciate pretty much everything about your reply up above.
Agreement that there was a false equivalency re: right now vs. ever.
Agreement that my phrasing presupposed an outcome (though that makes sense when you take the context of “the guy talking is the curriculum director at CFAR”). I predict that outcome, optimistically, but in fact the actual target should be and is “investigate” not “repair.”
Unfortunately for the goal of record-keeping and evidence-creation, most of those interactions have taken place in person. I could generate stories about what they’re like, but a better option seems to be “start taking notes now when they happen, and ask permission to make said notes public with reasonable anonymity.”
Thanks for responding 100% positively/exactly as I would hope a LWer would respond. I’d love it if you let me know if I myself am not living up to that standard, as you gently did above.
Thank you for the kind words.
Re: the previous interactions: that no notes from them are available is not the problem, nor would have notes help in any meaningful way. (Plus—and I really hate to be so blunt about this, but—notes can say whatever the note-taker, or even the note-poster-to-a-public-website, wants them to say! I’m not seriously suggesting falsification of anecdotal evidence, and as I say below, this is not really my primary concern here, but from the appearance-of-propriety perspective, having notes is not a great situation.)
No, the reason I asked about whether the cited interactions took place in person is certainly not disbelief or lack of evidence; and it is only in lesser part the desire to examine the interactions and see what I can conclude from them. The real reason is that an interaction in person is tremendously different from an interaction via a web forum (like this one)!!
These differences are so profound and far-reaching—and so especially relevant for people with “our sort” of minds—that I hesitate to even begin enumerating them (though I’ll attempt to, upon request; but they should be obvious, I think!). The point, in any case, is that viewed in light of these differences, your track record of convincing nay-sayers, while undoubtedly real, should be much less persuasive, even to yourself, than you imply it to be.
It would be very different if you could point us to an online exchange, where you, and a serious and thoughtful interlocutor, took the time to compose comments and replies back and forth—the paradigmatic example of such, around here, being the Yudkowsky–Hanson “AI Foom” debate. (Ah, but how did that one turn out, eh?)
I request this enumeration, if your offer extends to interlopers and not just Duncan.
(The differences I can think of are instant vs asynchronous communication, nonverbal+verbal vs. verbal only, and speaking only to one another vs. having an audience. But I don’t see why these are *inevitably* so profound and far-reaching.)
Yeah—I actually think by far the biggest reason Double Crux hasn’t caught on is because no one has written a post optimized for getting it to catch on (Duncan’s post is instead optimized for making sure that the people that get it actually get the whole thing, and I think it requires you to trust that it’s worth the effort)
Up until last week, I actually thought Double Crux was a pretty straightforward concept (or at least, one that builds directly from ideas that are already common among educated people).
You could summarize Double Crux like this:
I. Ray Attempts to Explain Double Crux
Oftentimes, smart people end up talking past each other, or trying to score social points, or otherwise arguing in a way that doesn’t accomplish anything. This results in people wasting years arguing pointlessly, and moreover, at least half of those people spend years being wrong about stuff they could have talked through and figured out.
Double Crux is a technique to help short-circuit those pointless arguments, and instead figure out useful things together. Specifically, it is the first step of having a useful disagreement: figuring out what concrete thing you disagree about that you can potentially just go and check to see if it’s true.
The steps are:
1. Shift into a mindset where you’re in a collaborative truthseeking endeavor, rather than a debate where you’re trying to score points
2. While in that mode, figure out what would actually, seriously make you change your belief (while your partner does the same for themselves). This is your Crux
3. Together, try to find a concrete thing you both disagree on, that would change both your minds depending on whether it was true. (i.e. if you could run an experiment and the world turned out one way, I’d change my mind, and if it turned out the other way, you’d change your mind). This is the Double Crux. (I actually think the phrase “Shared Crux” is a bit clearer)
4. If the Double Crux isn’t something you can easily check in the real world, see if you can find a related feature of the world that’s at least evidence about whether the Crux is true.
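The crux-matching in step 3 can even be sketched as a toy program, if it helps make the idea concrete. (This is purely my own illustrative framing—the participants, statements, and function here are hypothetical, not anything from CFAR’s materials.)

```python
# Toy model of step 3: each participant lists cruxes -- statements whose
# truth-value, if settled, would change their mind on the main claim --
# along with the truth-value they currently assign each statement.
# A shared ("double") crux is a statement that is a crux for both sides
# AND whose truth they currently disagree about.

def find_double_cruxes(cruxes_a, cruxes_b):
    """Return the statements that are cruxes for both participants
    and on which the two participants currently disagree."""
    shared = set(cruxes_a) & set(cruxes_b)
    return sorted(s for s in shared if cruxes_a[s] != cruxes_b[s])

# Hypothetical dispute: "Double Crux is worth teaching."
alice = {
    "written descriptions alone transmit the technique": True,
    "public worked examples exist": False,
}
bob = {
    "written descriptions alone transmit the technique": False,
    "public worked examples exist": False,
}

# Both agree no public examples exist, so that isn't a live disagreement;
# the statement worth investigating is the one they assign opposite values.
print(find_double_cruxes(alice, bob))
```

Real conversations are messier, of course—the hard work is in generating honest cruxes (step 2), not in intersecting the lists—but the intersect-and-disagree structure is the whole of step 3.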
This isn’t really especially original. “Make sure you’re not talking past each other, figure out what you’re actually disagreeing about, figure out a way to test it empirically” is something people have been doing since way before CFAR.
In my opinion, the value-add is mostly giving it a name, operationalizing it, and specifically claiming that people should be doing this all the time, whenever a major disagreement happens that’s important to resolve, instead of arguing in circles.
II. But, maybe this is harder?
Last week, a couple of people argued with me that this is in fact fairly hard: you can’t really learn how to do it except by watching skilled people do it, and reading a couple-paragraph description of it isn’t nearly enough. It’s more like an artform than an easily learned technique. I’m unsure about that (I have vague plans with those people to talk it over in more detail later).
Right now I’m writing this mostly to provide better background for people who haven’t been following all the discussions of Double Crux lately (most of which have, yes, been on Facebook; this post is my attempt to change that).
I certainly appreciate that.
Let me offer a couple of suggestions, that would, at least, help you explain it to me (and perhaps to others? but that’s as may be):
1. Extensions, not intensions.
I’d really like an actual, live (by which, of course, I mean “online”) example of people using this Double Crux business. Like, actually for-real (and not, say, as a demonstration example, arranged for the purpose of showing off the technique).
Is it even doable online? In a forum / blog context? Perhaps at least in chat? Or is it only something that can be done in person? (If so, that makes it of limited use, at least, to the LW audience—useful though it may be to your local, meatspace, community of rationalists!)
2. Applicability.
Someone recently said to me, of Double Crux (I am quoting from memory): “it seems like a decent attempt to solve a problem that almost never happens”. He meant, I think, something like—most of the time, when people (even rationalists) disagree or argue or otherwise fail to see entirely eye to eye on a matter, it is not in a way that would be solved by identifying some key fact about which they differ.
How would you characterize the class of situations in which Double Crux is applicable? How often do you think such situations come up (in comparison to, say, the category of “all disagreements that occur between people”, or even “all disagreements that occur between rationalists”)? Could you, again, point to several (at least three) real, live examples—publicly perusable by your readers here—of disagreements which Double Crux would cut through?
This second point seems to me to be of the highest importance, especially because you say:
But in fact, Double Crux’s applicability is very limited in scope; or else I really understand nothing about it. So—explain! :)
I’d be willing to do an asynchronous attempt to double crux about whether the problems that motivated the creation of double crux ever happen. We could then post the results as a public example. My understanding is that the person who said that to you misunderstands the problems the technique is trying to solve, because they definitely happen all the time in my experience.
Well, I’m not willing to take (and have never taken) the position that such problems never happen. As for your offer, it is appreciated, but I was hoping first to look at an existing example (or three), before trying it myself; else I would surely do it wrong, and the attempt would prove nothing…
But maybe, as a sort of prelude, we could start with you giving some examples of real-life situations that would be solved by the Double Crux?
Yeah. (Also thanks for being willing to spend time on this—when I imagine myself thinking a thing is Useless, then I imagine it feeling costly to give it extra chances to prove itself.)
The counting up vs counting down post that I wrote yesterday to near zero acclaim is one of them—often people are sort of talking past each other and both people seem to be fighting for good and coherent goals, and double crux motions (why do you believe what you believe, what would cause me to change my own mind) helps uncover those faster than default motions. “Ohhhhh, wait, hang on—I think I would agree with what you’re saying if I thought that we couldn’t expect to do this perfectly, and should be happy with any results above zero, and happy proportional to how far above zero we get.”
Another is the issue of burden of proof, which I think I’ve read cited in double crux explanations specifically somewhere, maybe on Facebook. The thing I’m remembering is something like, if both sides disagree about where the burden of proof lies, then both sides will end up “declaring victory” prematurely and saying that the other side has failed to justify itself. So if Bob thinks corporal punishment is how it’s always been done, and it’s on the bleeding hearts to prove that one should never spank kids, and Joe thinks nonviolence and sovereignty are the obvious priors, and it’s on the backwards troglodytes to prove that spanking is net beneficial, the debate won’t ever really move forwards productively. Double Crux solves this in theory because each person, if constantly scanning their own belief structure and asking what would cause them to change their own mind, will notice what burden of proof they’re already expecting of their own beliefs, and can make that known to the other person.
Some other situations, off the top of my head:
You and I are in a car in traffic, and I honk the horn at someone and wave a middle finger at them, and you’re really uncomfortable and criticize my road rage, and we’re trying to converge on whether it was actually right that I did what I did. Double Crux seems like a good tool for each of us to get to the bottom of our implicit models and make them available to the other person.
You and I are living together in a house, and we have some sort of agreement about the cleanliness of the common spaces, and we keep clashing over it such that I feel judged and you feel defected on, and to some extent (given that each of us has our own frame) we’re both right. Double Crux (or at least the generators that caused Double Crux to be invented) seems like a useful tool for helping us keep the argument on track à la “under what circumstances would you agree my mess was permissible/under what circumstances would I agree I’d been too cavalier” (such that we can feel confident things will be different in the future because our models now converge), versus having it spiral off into “you’re a dick/you’re a slob,” which isn’t crucial to our disagreement in the same way.
You and I are trying to decide how to divide a chunk of value (e.g. $10000 we were given in a grant, or our work hours over the next month) and we strongly disagree to the point that there’s sort of a zero-sum game (e.g. I need all of my hours and some of yours to accomplish my plan, and the same is true in reverse for your plan). We could resolve this through rank, or we could resolve it in a social pressure game, or we could just fight and sink everything, but through Double Crux or something like it it seems likely that we can come closer to understanding why the other person is so confident that their use of resources is better, and once we both have identical overlapping models of both sides it seems likely that we can act strategically in a coordinated fashion to choose the best tradeoff.
Hello, I’m the person who said Double Crux seems like an attempt to solve a problem that almost never happens. More specifically, the disagreements I see happening between reasonable people are almost always either too easy or too hard for Double Crux to be useful.
On questions like “what is the longitude of Tokyo” or “who starred in the original Star Wars,” two people could agree that looking up the answer on Wikipedia would convince both of them, which would technically fulfill the formal rules of Double Crux, but that hardly seems like a special “rationality technique” or something CFAR can take credit for inventing.
On the other hand, on a question that hinges on value differences like your examples, I can see one of three things happening: either the disputants compromise their honesty by agreeing on a crux which appears relevant but isn’t actually connected to the real motivations behind their disagreement (“if spanking is statistically correlated with a decrease in lifetime earnings, p<0.05, then it is bad, otherwise it is good”), or they maintain their honesty but commit themselves to solving longstanding open problems in metaethics and/or changing genetically mediated personality differences through verbal argument, or they end up using other negotiation techniques and falsely calling it Double Crux.
Double Crux does seem applicable to questions where the answer can’t simply be looked up, where the disagreement is strictly confined to the empirical level and doesn’t touch on value differences or epistemological questions in any way, yet also where the evidence is ambiguous enough to allow for reasonable disagreement. But those are rare in my experience.
I note there’s something in here that I’m reading as a pseudofallacy—it’s the same reason why Mythbusters is terrible, and it goes like “I can only think of these three outcomes, and therefore those are the most likely outcomes.”
This thread and the original Double Crux thread on LessWrong (plus the ~1000 or so CFAR alumni) are full of people saying that Double Crux does indeed work to solve discourse problems that crop up a lot.
That absolutely does not erase your personal experience of a) not seeing those problems and b) not seeing Double Crux solve them. Your personal experience is valid and real and definitely counts as data.
But there’s a particular sort of … audacity? … in taking one’s own, personal experience, and using it to trump the experiences of others, and concluding with fairly strong confidence “this thing that a lot of smart people say is useful just isn’t.”
In your shoes, I’d say something like what I said in my Focusing post, which is “this thing that is useful for a lot of people isn’t useful for me or the people around me.” That seems more solidly justified and epistemically sound, and enriches an onlooker’s understanding of the situation rather than creating crosswise narratives.
In particular, as I tried to do with Focusing, I’d make a genuine attempt to learn Double Crux (from the people who know what they’re talking about and can point out your mistakes and scaffold your understanding) before writing it off. I weakly predict that you haven’t done A + B + C where A is attend a CFAR workshop or one of their Double Crux instruction sessions at e.g. EA Global, B is talk directly to somebody who’s skilled in Double Crux and ask them to help you overcome the standard failure modes, and C is go out and really actually try to follow the real actual steps for five very different sorts of disagreements with real actual humans.
(By the way, it’s completely fine to have not done A + B + C. People have higher priorities. But I personally think that in a rationalist community like Less Wrong, we have a responsibility to not claim things are false or useless or stupid until we’ve actually attempted to falsify them, not just scanned through our own experiences for confirming evidence. If I were in your shoes and I didn’t think Double Crux was useful and I also didn’t intend to do A + B + C, I’d caveat my suspicions of its relative uselessness heavily by pointing out that I was using Stereotypes rather than Rigor, and I want people on Less Wrong to call for and socially reinforce that sort of standard.)
Will probably add that to my list of posts to write this month.
Also, am willing to do the thing that’s been suggested over and over in this thread, and do a Double Crux with you on the usefulness/uselessness of Double Crux, including doing the motions unilaterally while you do whatever you feel like. I could use more practice with Double Cruxing in a not-fully-cooperative environment, since it seems like a plurality of the important debates happen with people who aren’t willing to enter the Double Crux frame anyway.
You accuse me of using Stereotypes rather than Rigor, but I in turn accuse you of using Social Proof rather than Rigor, which I consider far more dangerous, because it leads to self-reinforcing information cascades. By reflexively characterizing all skepticism as hostile, you further reinforce this dynamic by creating a with-us-or-against-us atmosphere.
Yes, I don’t actually believe that ~1000 or so CFAR alumni self-reports represent enough evidence to overturn my initial opinion. There are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy, but I wonder if you would as forcefully reject a similar Stereotype-based dismissal of that. I’d be very happy to see some real rigor, but I’m not aware of any such from CFAR that I would actually trust to bring back a negative result if the same procedure were used on homeopathy enthusiasts. (And by the way, in 2014 Anna Salamon said CFAR was “supposed to be doing better science later,” meaning better than self-reports and personal impressions. How much later is later?)
I never gave any indication that my comment represented anything but my own personal impression, or that it somehow trumps the experiences of others. But I’m going to keep pointing out that I see the emperor wearing fewer clothes than he claims for as long as I continue to see it that way, and I consider this to be an explicitly prosocial act. I don’t gain anything personally by this, and these contentious posts are actually fairly stressful for me to write, but I consider it worth it to try to push back against your open advocacy of credulousness and protect a rationalist community like Less Wrong from evaporative cooling.
I have not in fact attended a CFAR workshop and don’t intend to, for reasons that might get me in trouble with the “Sunshine Regiment” if I were to explain, but I have read the posts explaining Double Crux and have even found it useful once or twice. I’m happy to try it with you if you’d like.
I disagree with your claim that I “reflexively characterized all skepticism as hostile.” I have reread my own comment and I do not think that’s a fair or accurate synopsis.
I believe you are overstating your claim that “there are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy” and disagree with the attempt to draw an equivalency there (I both do not think the situations are analogous and don’t think you could actually find thousands of people in the intersection of “smart” and “endorses homeopathy”).
My main point is that it looks to me like you are skeptical of everything but your own impressions, and that Less Wrong should be the sort of place where people actually take heuristics and biases literature seriously, and take the Sequences seriously, and are aware of how fallible their own thinking and impression-making mechanisms are, and how likely it is that they’re being influenced by metacognitive blindspots, and take deliberate and visible steps to compensate for all of that by practicing calibration, using reference class forecasting, taking the outside view, making concrete predictions, seeking falsification rather than confirmation, etc. etc. etc.
In short, I wasn’t asking you to be less skeptical, I was asking you to add one more person to your list of people you’re skeptical of—yourself.
I’m attempting to point out that your claim “Double Crux seems like an attempt to solve a problem that almost never happens” seems to have been outright falsified—even if your homeopathy analogy holds, homeopaths aren’t necessarily hypochondriacs, and I would trust the reports of homeopaths who are saying “I am experiencing this-or-that physiological distress which requires some form of treatment” or “I am having this-or-that medical problem which is lowering my quality of life” without reference to their thoughts on what would fix it. It does not seem that you are updating away from “the problems that Double Crux purports to solve are rare” and toward “those problems are rare in my experience but reliably common for large numbers of people.”
I’m attempting to point out that your statement “I can see one of three things happening” was made in such a way as to imply that there are no other likely things that might happen, and that you’re considering your ability to generate hypotheses or scenarios or predictions to be likely sufficient and near-complete. It’s like when Mythbusters says “Well, we failed to recreate claim X, and therefore claim X is impossible!” That whole paragraph was setting up strawmen and false dichotomies and ignoring giant swaths of possibility.
I didn’t feel like you really addressed any of the thrust of my previous reply, which was something like “If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?”
It does not look, based on your comments thus far, like you’re sincerely asking that question. Again, that’s fine—it could simply be that it’s not worth your time. Or it could be that you’re asking that question and I just haven’t noticed yet, and that’s fine because it’s in no way your job to appease some rando on the internet, and my endorsement is not your goal.
But the issue I have, at least, has nothing to do with your opinion on Double Crux. It has to do with the public impression you’re leaving, of how you’re forming and informing it. You’re laying claim to explicitly prosocial behavior on the basis of continuing skepticism, and I simply don’t believe you’re living up to the ideals you think you are. I think Less Wrong has (or ought to have) a higher standard than the one you’re visibly meeting. The difference between solving the Emperor’s Clothes problem and just being a contrarian is evidence and sound argument.
Is this ad hominem? Reasonable people could say that clone of saturn values ~1000 self-reports way too little. However it is not reasonable to claim that he is not at all skeptical of himself, and not aware of his biases and blind spots, and is just a contrarian.
Personally, I would go to a post about Double Crux, and ask for examples of it actually working (as Said Achmiz did). Alternatively, I would list the specific concerns I have about Double Crux, and hope for constructive counterarguments (as clone of saturn did). Seeing that neither of these approaches generated any evidence, I would deduce that my impressions were right.
What makes you think describing why you personally won’t go to a workshop would get you in trouble?
I suspect I’m already being more confrontational than you’d prefer, and I don’t want to further wear out my welcome, or take the risk of causing unnecessary friction, by bringing up any other potentially negative points not directly related to CFAR’s rationality content or Double Crux. Should I take it that I was being unnecessarily cautious?
Hmm… I appreciate the effort that went into your reply, but I think I may’ve been unclear about what I asked: I was hoping to see actual examples—not hypothetical examples, nor categories (into which some unspecified examples are alleged to fall)!
That said, your hypothetical examples are relatively informative, so, thank you! They do much to increase the certainty of my previously-somewhat-tentative view that Double Crux is not a terribly useful technique in most circumstances (such as most of the ones you listed).
This, clearly, is the opposite reaction to the one you were (presumably) hoping for; perhaps I still have some fundamental misunderstanding. Real-life examples would, I think, really be quite helpful here.
Hmmm. Maybe there’s something in here about the difference between “Double-Crux-like” and “formal Double Crux”? On reflection, after you said you’re more certain Double Crux is low-utility, I was maybe imagining that this was because you saw the formal Double Crux framework as brittle or overly constraining, whereas you might agree that somebody adhering to the “spirit” of Double Crux (which could also be fairly labeled the spirit of inquiry or the spirit of cooperative disagreement or the spirit of impartial investigation and truth-seeking, because it’s the thing that generated Double Crux and not something that’s owned by the named technique) would be more likely to make progress than someone not adhering to said spirit.
this seems like intentionally rude wording to me.
(edited—this is all I ever meant.)
Replying to your edit:
General request to all commenters: when editing a post to change wording or content, please retain the original wording / content, if existing replies to your comment reference it or depend on it in any way. Doing otherwise destroys the coherence of comment threads, and makes them less useful to later readers.
Re: the edited comment: it baffles me that you perceive that sentence as not only rude, but so rude that it could only be intentional—given that I chose my words carefully, to avoid explicit abuse or impoliteness! How could I have phrased my comment instead, in your opinion, that would’ve upgraded it at least to the level of “unintentionally rude” (“actually polite” is probably too much to hope for), without losing the meaning?
I am dismayed by the discourse norms that such comments imply. :(
I am surprised at 2, and want to retract my comment and make this whole subthread not able to hurt me any more. I’m feeling a lot of social disapproval at my having posted the comment, and my update from it is to just not make comments like that, which I think is a good outcome for your preferences about discourse norms. I can’t stand social disapproval like this, and I feel an urgent need to change however will make it go away the fastest—on most sites, that’s “delete my comment, never post another one like it”.
Though actually, I have 4 points now. But I still acutely feel your disapproval of my having expressed disapproval at you, and want to just take it back and let you talk how you want.
(meta: it’s quite scary for me to try to be honest about this. I feel urge to reply with my actual feelings in the interest of truth seeking, but normally would just be silent.)
Upvoted. I regret that my comments had this effect on you (though do not regret making them). I hope that you will continue to comment no less earnestly than you’ve done so far, and encourage you to do so.
My discourse norms are honesty, integrity, and truth.
I like this. My approval drives would lead to a chilling effect on truth-seeking if everyone tried to white-box optimize them when having conversations, and I don’t endorse that; I’d rather people hurt me a bit than fail at truth-seeking. I wish I had a better way to defend myself from the hurt of social disapproval, though; eg, disowning a comment.
I endorse those.
Strongly agree on #1 (with obvious exceptions if the original wording reveals trade secrets, libels people likely to bring legal action, etc.; but in those cases you should still describe what used to be there even if you can’t preserve it).
On #2, I can’t share SA’s bafflement. What isn’t rude about saying that a particular organization is so useless that its recommending a thing, in the ordinary course of doing its job, is evidence against the value of doing that thing?
I guess it’s not rude if you know there’s no one around who belongs to, or identifies strongly with, that institution. But that’s not very likely in these parts. Otherwise: what baffles me is how anyone would expect that not to be rude.
(To be clear: “Rude” is not the same thing as “bad” or “wrong”. Sometimes being rude is a good thing. Sometimes it is a necessary evil. I am not claiming that no one should ever be rude.)
You seem to be using “rude” in such a way that the property of rudeness can attach to claims on the basis of their propositional content only. That, to me, is a very strange usage.
It seems to me that either you must think that there’s nothing necessarily wrong with being rude; or, you must think that certain claims simply cannot be made, certain propositions simply cannot be expressed—regardless of their truth value (if they are not trade secrets or so on).
I disagree with the latter, and prefer a word usage that makes the former false (else the word “rude” becomes largely useless).
It’s too late to accomplish this by this point, but the response I had planned for your CFAR comment (I actually had it planned before lahwren responded), which I didn’t have time to write before going to bed, was something like:
“I had an initial negative reaction and urge to downvote when I saw the CFAR comment, but I quickly noticed that most of that was coming from a place of tribal emotions (i.e. ‘must defend my people!’) which I didn’t endorse. I briefly considered trying to respond in a more careful way that got to the heart of the issue, but it seemed like the ‘yay CFAR? / boo CFAR?’ question was basically a distraction. There may be a time/place for it but this isn’t it.
I’d prefer if people didn’t end up having a giant discussion about “is CFAR good/bad?” and instead stuck to discussion of Double Crux as a technique.”
Having said that, in light of your other comment about wanting to see a public Double Crux, “should CFAR be positive or negative evidence of a technique’s validity” is precisely the sort of question that Double Crux is for, and I’d be interested in doing a public DC on it with you if you’re up for it (normally I’d suggest skype but since part of the point is to produce something easy for others to consume, chatlog could be fine)
(that said, I’m fairly busy in the next 30 hours or so. I might be up for it Friday night or over the weekend though)
(Edit: it looks like some other people also offered something like this, I don’t think it’s especially important I be involved, but think it’d probably be valuable in any case)
I agree with you re: the grandparent, and I appreciate the offer re: the Double Crux.
I am, sadly, unlikely to be able to take you up on it; my “commenting on or about an internet forum” time budget is already taken up by this flurry of activity here on LW 2.0.
Instead, I’d just like to reiterate my request / suggestion that you folks find some way to be able to point readers to pre-existing, publicly viewable examples of the technique being used. I think much hinges on that, at this point. Offering, when questioned, to demonstrate Double Crux, by way of trying to debate whether Double Crux is any good, is all very well, but—it simply doesn’t scale!
Doesn’t scale, but seems like it should happen at least once. (tongue sort of but not entirely in cheek). Then you can just link to it the second time.
The problem is that Double Crux is best conducted in ways that aren’t very amenable to publicizing (i.e. a private walk where people feel freer), so there needs to be some attempt to do a public one at a time when:
- it’s high enough stakes that it matters, so you can see people using the technique for real
- it’s low enough stakes that it’s okay to publicly share it without you having to worry about “looking good” during the discussion
- it’s convenient to record in some way
Well, as I say elsewhere in these comments—that does make it of rather limited utility to much of the LW readership!
I agree, which is why I think noticing that there’s an opportunity to do a public one (i.e. now) is something that should be treated as a valuable opportunity that’s worth treating differently than arguing-on-the-internet-qua-arguing-on-the-internet.
(I also think arguing “should ‘created by CFAR’ be positive or negative evidence” is at least slightly less meta-sturbatory than “let’s double crux about double crux”)
Strong agree that it’s both true that “the lack of an example to point to produces justified skepticism” and that “that’s partly unfair because that skepticism and other ‘too busys’ keep feeding into no one taking the time to create said example.”
Yes, I think things can be rude on the basis of their propositional content. (But not only their propositional content.) If I state that you are very unintelligent, and I say it in the presence of you or of your friends, then I am being rude. I can do it in extra-rude ways (“Said is a total fucking moron”) or in less-rude ways (“I have reason to think that Said’s IQ is probably below 90”) but however you slice it, it’ll be rude.
(For the avoidance of doubt, of course I do not in fact think any such thing.)
I do, indeed, think there is nothing necessarily wrong with being rude. As I said: Sometimes being rude is a good thing, and sometimes it’s a necessary evil. All else being equal, being rude is usually worse than not being rude, but many other things may outweigh the rudeness.
I don’t see that this makes the word “rude” largely useless, and I’m not sure why it should. If you mean it makes it meaningless then I strongly disagree (I take it to mean something like “predictably likely to make people upset”, though for various reasons that isn’t exactly right). If you mean it makes it unactionable then again I disagree; it just means that acting on the knowledge that something is rude is more complicated than just Not Doing It. (If you want to upset someone, which there may be good reasons for though usually there aren’t, then rudeness is beneficial. If you don’t but other things are higher-priority for you than not upsetting people, then you weigh up the benefits and harms, as always.) If you mean something other than those and the above hasn’t convinced you that my way of using “rude” isn’t useless, then you might want to explain further.
Indeed I meant “meaningless”, or perhaps “encompassing many disparate meanings under the umbrella of one word; attempting to refer to unrelated concepts as if they are the same or closely clustered; failing to cleave reality at the joints”.
I find it quite unnatural to apply the word “rude” as you do, and, to be extra clear, will certainly never mean anything like this when I use the word.
My takeaway here is that if you tell me that something is “rude”, I have not really gained any information about what you think of the thing, nor will I take you to have made any kind of definite claim about the thing, nor even do I know whether you’re attempting to ascribe positive valence to it or negative. (This is, to my mind, an unfortunate consequence of using words in strange ways, though of course you are free to use words as you please.)
I suppose I will have to remember, should you ever describe my comments as “rude” henceforth, to reply with something like—“Ok, now, what actually do you mean by this? ‘Rude’, yes, which means what…?”.
I am confused. (And also, apparently, confusing, which I regret.)
If I say something is rude then you learn that in my opinion it is likely to upset or offend a nontrivial fraction of people who read it. (Context will usually indicate roughly which people I think are likely to be upset or offended.)
How is that no information? How have I made no definite claim?
(It is true that merely from the fact that I call something rude you cannot with certainty tell whether I am being positive about it or negative. The same is true if I call something large, ingenious, conservative, wooden, complex, etc., etc., etc. I don’t see how this is a problem. For the avoidance of doubt, though, most of the time when I call something rude I am being negative about it, even if I think that the rudeness was a necessary evil.)
My use of the word “rude” doesn’t seem to me particularly nonstandard or strange. It’s more or less the same as definition 5a in the OED, which is “Unmannerly, uncivil, impolite; offensively or deliberately discourteous”. (The OED has lots of definitions, because “rude” does in fact have lots of meanings. It can e.g. sometimes mean “unrefined” or “vigorous”.)
Clearly you are dissatisfied with my usage of the word “rude”. Perhaps you might tell me yours; it is still not clear to me either what it is or why it might be better than mine. From what you say above, it seems that you want it used in such a way that “X is rude” strictly implies “X is morally wrong”, but if that’s really so then I’m unable to think of any meaning that does this while coming anywhere near the specificity that “rude” usually has. (At least for those who have moral systems not entirely based around not giving offence, which I am pretty sure includes both of us.)
I admit that I’m puzzled by your comment. What is it that you think I might be hiding, or that I might wish to (plausibly) deny…? I thought I’d made myself reasonably clear, but if some part of my comment’s meaning seems obscure to you, I’d be glad to clarify…
(As a side note, and more generally, I’d like to note my very strong distaste for any community / site discourse norms that required commenters to hold to “prosocial wording” at all times. There is a difference between respectfulness and common decency, on the one hand, and on the other, this sort of stifling tone policing.)
I agree: it doesn’t read at all like an attack hidden behind plausible deniability, it reads like an attack that isn’t hidden at all.
But what’s it for?
Unless you think there are a lot of LW-adjacent people who regard “X comes from CFAR” as evidence against X being useful (my guess is that there are not, though there are probably a fair few who think “X comes from CFAR” is no evidence to speak of that it actually is useful), it’s not doing anything to resolve Raemon’s curiosity about why the technique hasn’t become popular. (I think the rest of what you wrote, however, does an admirable job of that, and I agree that it seems like a sufficient explanation.)
And, if in fact doublecruxing’s CFAR origins are a problem for any reason, it’s not like there’s much anyone can actually do about them.
The immediate impression I get from your remark about CFAR is this: “Said Achmiz really doesn’t like CFAR, and he wants everyone to know it, so much so that he puts anti-CFAR jabs into comments where they add nothing and probably serve only to antagonize people who might otherwise listen more willingly to what he’s saying”. It’s the same feeling I get from the similar jabs some people like to make at one another’s political or (ir)religious positions. I think they (and I am very much including yours here) tend to push discussions in the direction of tribal warfare (are you on Team CFAR-is-Good or Team CFAR-is-Bad?) and make them less productive.
There absolutely should not be any sort of obligation to be “prosocial” here. And if you wrote a post about why you think CFAR does more harm than good, I would read it with interest and probably upvote it. (My main reservation would be that communities like this tend to spend too much time discussing themselves and not enough time discussing actual issues, and this might be heading in the same direction.) But, while I’m not sure I can endorse the specific complaint lahwran made, I very much do endorse a slightly different one: your comment about CFAR was gratuitously rude and largely irrelevant, and what you wrote would have been better without it.
I am concerned about a fairly mild anti-CFAR comment getting this much criticism. I do think “part of the reason I haven’t adopted double crux is that I don’t trust CFAR” is a relevant comment. Even if it wasn’t, I worry that motivated reasoning will cause people to be far more upset about criticism of respected rationalist organizations than they are of other institutions, and for this to lead to a dynamic where people are quiet about their feelings about CFAR for fear of being dogpiled. This seems harmful both as a community norm and to CFAR itself.
To be clear, I am not complaining about SA’s comment because it’s anti-CFAR. I’m pretty skeptical about CFAR myself; I wouldn’t go as far as SA does, but the fact that CFAR recommends something doesn’t seem to me very good evidence for it.
I’m complaining about SA’s comment because it seems to me irrelevant, un-called-for, and likely to annoy or upset some readers (of whom I am not one) with no offsetting benefit to make it worth while.
But I very much hope that no one feels unable to criticize CFAR or MIRI or any other entity for fear of being dogpiled, and (as one alleged dog in the alleged pile) promise that if I see such dogpiling happening to someone for relevant criticism then I will be right there on the barricades defending them.
I’m actually confused that you think my comment was bad—I was thinking the same thing you ended up saying.
I’m confused too. I don’t think your comment was bad, though as I wrote I’m not sure I could quite endorse the exact complaint it originally made.
Here’s a more general comment re: the relevance of my aside—not about this issue in particular, but this general class of things.
I have, quite a few times in the past, had the experience of bringing up something like this, and having the responses of other participants or potential-participants in the discussion be split along lines as follows:
Some people: That was unnecessary! And irrelevant. No one else feels this way, why bring your grudge into this unrelated matter?
Other people: Thank you for saying that. I, too, feel this way, and agree that this is highly relevant, but didn’t want to say anything.
Those in the first category are usually oblivious to the existence and the prevalence of those in the second.
So yes, I think that it is not only absolutely permissible, but indeed mandatory, to insert just such asides into just such discussions. If there’s no uptake—well, then I simply drop the matter. Saying it once, or at least once in a long while, is sufficient; I have no problem changing the subject. But pervasive silence in such cases is how echo chambers form.
I can very well believe that remarks like this get exactly those sorts of comments, but I don’t think the existence of the Other People is good evidence that the remarks are a good idea. All it need show is that there are people who are cross about X (in this case X=CFAR) and feel that their views are underrepresented, which is not sufficient to make anti-X jabs helpful contributions to any given discussion.
If your opinion is that CFAR is a fraud or a scam or just inept, and you want to reassure others who hold similar views, then make a post actually about that, explaining why you think it. It’ll be far more effective in showing those people that they have allies, it’ll provide a venue for others who agree to explain why (and for those who disagree to explain why, which should also be important if we’re trying to arrive at the truth), and it’ll have some chance of persuading others (which at-most-marginally-relevant jabs will not).
If going to the effort of writing a whole post about a concern is a prerequisite to ever mentioning the concern at all, then I think that’s an entirely unreasonable barrier, and certain to create a chilling effect on discussions of that concern. I oppose such a policy unreservedly.
I thought that “and the concern in question is relevant to the current discussion” was implied. But consider it now stated outright! Append that, mentally, to what I said in the grandparent. (Certainly, as I made clear in the parallel thread, I think that the CFAR issue is relevant to this discussion.)
Perhaps I wasn’t clear: I don’t think you are, or should be, forbidden to mention your opinions of / attitude to CFAR if you aren’t willing to make a whole post explaining them. That would be crazy.
What I do think (which seems to me much less crazy) is this: 1. If, as you say three comments upthread from here, you feel that you have an obligation to say bad things about CFAR in public so that LW2 doesn’t become a pro-CFAR echo chamber, then what you’ve done here is not a very effective way of doing it, and writing something more substantial would be much more effective. And: 2. Dropping boo-to-CFAR asides into discussions of something else is likely to do more harm than good (even conditional on CFAR being bad in whatever ways you consider it bad; in fact, probably more so if it is) because its most likely effect is to make fans of CFAR defensive, people who dislike CFAR gloaty, and people who frankly don’t care much about CFAR annoyed at having what seem like political rivalries injected into otherwise-interesting discussions.
Of course, what’s ended up happening is that there’s been a ton of discussion, and you may end up expending as much effort as if you’d written a whole post about why you are unimpressed by CFAR, but without the actual benefits of having done so. For the avoidance of doubt, that wasn’t my intention, and I doubt it was anyone else’s, but it’s not exactly a surprising outcome; gratuitously inflammatory asides tend to have such consequences...
Very enthusiastic +1 to this. I also don’t want to have a policy (that, empirically, I currently have, I guess?) of making people who say things like what you said end up having to defend their views for hours in replies.
I do think that, in fact. (Caveat: I don’t know about “a lot”; I couldn’t speak to percentages of the user base or anything. Certainly not just me, though.)
If you took my comment as merely a political jab, feel free to ignore it. I am certainly not interested in discussing CFAR-in-general in this thread (though I would be happy to discuss it elsewhere). But that part of my comment was fully intended to be as substantive and on-point as the rest of it.
I think that it might be productive for the moderation team to comment on this point in particular. It seems like this might be a genuine difference in expectations between segments of the user base, and between the moderators and some of said segments.
Thank you.