Moderation Reference
This is a repository of moderation decisions that we expect to make semi-frequently, where it’s somewhat complicated to explain our reasoning but we don’t want that explanation to end up dominating a thread. We’ll be adding to this over time, and/or converting it into a more scalable format once it’s grown larger.
Death by a Thousand Cuts
There’s a phenomenon wherein a commenter responds to a post with a reasonablish-looking question or criticism. The poster responds; the response doesn’t satisfy the commenter’s concerns, and it turns into a sprawling debate.
Most of the time, this is fine – part of the point of LessWrong is to expose your ideas to criticism to make them stronger.
But criticism varies in quality. Three dimensions we think are important (in descending order of importance) are:
Steelmanning – The best criticism engages with the strongest form of an idea. See this post for more detail on why you’d want to do that. Two subsets of this are:
Does it address core points? – Sometimes a critique is pointing at essential cruxes of a person’s argument. Other times it pedantically focuses on minor examples.
Does it put in interpretive effort? – Sometimes, a critic puts in substantial effort to understand the poster’s point (and, if the author worded something confusingly, help them clarify their own thinking). Other times critics expect authors to put in all of the interpretive effort in the conversation. (In some situations, the issue is that the author has in fact written something confusing or wrong. In other situations, it’s the critic who isn’t understanding the point).
Is it kind? – While kindness is less crucial than steelmanning, LessWrong is generally a more fun place to be if people aren’t being blunt or rude to each other. All else being equal, being kind rather than blunt is better.
Falling short on any one of the three spectrums above isn’t necessarily bad. We don’t want a world where all criticism must involve a lot of effort on the part of the critic. But over the years we’ve encountered a few users who frequently focus on criticism that is a) addressing noncentral points, or misunderstanding the author’s core point, while b) being a bit blunt and/or not putting much interpretive effort into the conversation.
The result is a stream of comments where any given comment seems basically reasonable, but the aggregate of them makes LessWrong feel like a hostile, nitpicky place. Several people have cited an abundance of this type of comment as a reason they left LessWrong.
The phrase “Death by a thousand cuts” hopefully gets across the sort of problem here.
This is somewhat challenging to moderate – if you’re a new user, it might look very heavy-handed to see someone suspended or banned for what looks (in isolation) like a fairly innocuous comment.
But since this pattern of commenting is one of the strongest complaints we’ve gotten about LessWrong, it’s necessary for us to take action on it from time to time. (Generally starting with two warnings, followed by a temporary suspension.)
(I don’t know if it’s appropriate to discuss these rules here; if not, let me know where to move this comment.)
I have some minor problems with the concept of steelmanning (as described in the linked blog post, and as commonly used among rationalists); but my fundamental problem with its usage here is this:
The linked post makes a reasonable case for steelmanning being a good practice, for one’s own personal epistemic hygiene. What it does not do, however, is make any kind of case that it’s a good idea to require it of one’s interlocutors in a discussion or a debate.
(To be clear, I don’t fault the author of the linked post for this; she does not claim or imply that requiring steelmanning of one’s interlocutors is prudent or healthy, so she has no obligation to defend any such claim. My problem is with the way the linked post, and the concept of steelmanning, is being used here, in the context of these moderation rules.)
I would be interested in seeing a case made for this latter proposition. I do not think that I have ever seen such; and if you asked me now whether making steelmanning a requirement or even an expectation is a good idea, I would—with moderate, though not strong, confidence—reply in the negative. Whoever disagrees—might you say why?
This is an important point, one which was stressed in EY’s description of “Crocker’s Rules” (and apparently missed by many, as mentioned later on): imposing something on oneself—a discipline—may be very useful, but is a very different thing from expecting that thing of others, and the justifications often do not carry over.
I agree with your distaste given my understanding of ‘steelmanning’, which is something like “take a belief or position and imagine really good arguments for that” or “take an argument and make a different, better argument out of it” (i.e. the opposite of strawmanning), primarily because it takes you further away from what the person is saying (or at least poses an unacceptably high risk of that). That being said, the concrete suggestions under the heading of steelmanning, addressing core points and putting in interpretive effort, seem crucially different in that they bring you closer to what somebody is saying. As such, and unlike steelmanning, they seem to me like important parts of how one ought to engage in intellectual discussion.
Yeah, thinking overnight and reading more comments this morning has me updating that I shouldn’t have used the word steelmanning there, and I’ll update it soon (although I’m not 100% sure what the best term here is and not sure there’s a single term that does what I want).
Background:
When discussing this with other mods, I suggested saying that LW should be about collaborative rather than adversarial truthseeking, and other mods noted that there is a time for adversarial truthseeking even on LW and it’d probably be epistemically fraught to try and bake collaboration into the LW discussion DNA.
I ended up writing the three bullet points (addressing core points, investing interpretive labor, and being kind) as standalone points. It seemed important to distinguish the first two points from the last one. I searched my brain for a term that seemed to encompass the first two points, generated “Steelman”, and then called it a day. But, yeah, steelman has other properties that don’t quite make sense for what I’m trying to point at here.
It sounds like principle of charity is a better match for your intended meaning than steelman.
(not an official mod take, me thinking out loud)
I feel like there was a piece here that charity doesn’t quite address, where I actually _did_ mean something closer (but perhaps not identical to) steelman.
Elsethread, Vlad notes that steelman often ends up replacing someone’s argument with a totally different argument. This part is bad for purposes of communication, since you might end up misunderstanding someone’s position. But I think it’s good for purposes of goal-directed-discussion.
i.e. in my mind, the point of the discussion is to output something useful. If the something useful is different than what the author originally intended, that’s fine. (This is where “collaborative discussion” felt more right to me than most other terms)
Yes, I think the word ‘steelmanning’ is often used to cover some nice similar-ish conversational norms, and find it regrettable that I don’t know a better word off the top of my head. Perhaps it’s time to invent one?
Your criticism of steelmanning is apt. Of “interpretive effort” I have already spoken elsewhere. As for “addressing core points”, however…
The problem with this criterion is simply that an author and a reader may disagree on what the core points are.
Note that I am not simply talking about cases where a reader misunderstands what the author wrote, and mistakes something for an intended “core point” that is no such thing! Such cases are, in a sense, trivial, insofar as clearing up the misunderstanding results in the original piece standing unmodified (excepting, perhaps, any clarifying modifications that are aimed at preventing exactly such misunderstanding from reoccurring in other readers).
Rather, I’m talking about cases where the reader understands what the author is saying, but considers the author’s claimed “core point” to actually be of peripheral significance, and considers a different point (one which the author either did not mention at all, or devoted only scant attention to) to be central. In this case, no misunderstanding, per se, is occurring (unless you call a deep conceptual or empirical error a “misunderstanding”, which I do not—I reserve the latter term for miscommunication).
This sort of thing can come in many forms. I will describe just one of them here.
It sometimes happens that someone—call him Dave—will write a post describing some (purported) phenomenon P—some pattern, some dynamic, some causal structure, etc.—and, to illustrate P, will provide an example E.
Carol, a reader, then comes along and says: “Actually, Dave, I don’t think E is an example of P! Consider the following…” Dave perhaps defends his example, perhaps not, but agrees, in the end, that E is not really a good example of P after all. “But,” says Dave, “that was just an example! Quit nitpicking, Carol—address my core points!”
What is Carol to say to this? Does Dave accuse her fairly? Is it mere nitpicking to attack a mere example?
But if E is the only example of P that Dave had provided, then—E having now been disqualified from that role—Dave is left with no examples of P. And if Dave cannot provide any other (or, I should simply say, any) examples of P, then perhaps P is simply not real? I can hardly think of a more “core” point than the question of whether the thing we’re talking about is even a real thing!
As I say, the “attacking an example” form is just one of the ways in which the “disagreement about what the core points are” dynamic can manifest. But it alone has, in my view, been responsible for a great deal of epistemic trouble among rationalists. As I’ve said elsewhere, we—the types of people who frequent such sites as Less Wrong—are very good at inventing abstract patterns, “crystallizing concepts”, constructing systems of classification, and so on. We are so good at it, in fact, that this talent of ours often gets away from us. It is absolutely critical to keep it reined in. (I do not choose the expression lightly! Reins, after all, are used, not to prevent a horse from moving, but to make it move in the direction you want to go—instead of uselessly running wild.) And those reins are made of real-world examples, they are made of extensions (as opposed to intensions), they are made of practice. Without them, deceiving ourselves that we’ve gained knowledge, when in fact we’re building sky castles of abstract nonsense, is all too easy.
Perhaps you have other examples of dynamics where what the ‘core points’ are is in dispute, but the Carol and Dave case seems like one where there’s just a sort of miscommunication: Dave thinks ‘whether E is an example of P’ is not a core point, Carol thinks (I presume) ‘whether there exist any examples of P’ is a core point, and it seems likely to me that both of these can be true and agreed upon by both parties. I’d imagine that if Carol’s initial comment were ‘I don’t think E is an example of P, because … Also, I doubt that there are any examples of P at all—could you give another one, or address my misgivings about E?’ or instead ‘Despite thinking that there are many examples of P, I don’t think E is one, because …’ then there wouldn’t be a dispute about whether core points were being addressed.
These can both be true. But they also may not both be true—for instance, in cases where E is representative of a class of pseudo-examples (i.e., scenarios that have the property of seeming to be examples of P but not actually being examples of P). Similarly, ‘whether E is an example of P’ is often indeed a core point in virtue of ‘if E is not an example of P, why did Dave think that it is?’ being a core point; the latter question often goes to the heart of Dave’s view, and his reasons for holding it!
It also happens to empirically be the case that many (perhaps, most?) real-life analogues of Dave do not consider ‘whether there exist any examples of P’ to be a core point of their claims (or, at least, that is the strong impression one gets from the way in which they respond to inquiries about examples).
Finally, ‘does Dave have any actual examples of P’ is a very strong indicator—strong Bayesian evidence, if you want to view it that way—of whether we ought to believe P, or how seriously we ought to take Dave’s claims. (No, “just evaluate Dave’s argument, aside from examples” is not an acceptable response to this!)
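(To spell out the “strong Bayesian evidence” framing, with made-up symbols; this is just an illustrative formalization, not something anyone in this thread committed to: let $H$ be “P is real” and $E$ be “Dave produces a genuine example of P when asked”. Bayes’ theorem in odds form gives:

$$\frac{\Pr(H \mid \neg E)}{\Pr(\neg H \mid \neg E)} = \frac{\Pr(\neg E \mid H)}{\Pr(\neg E \mid \neg H)} \cdot \frac{\Pr(H)}{\Pr(\neg H)}$$

If real phenomena usually come with producible examples, then $\Pr(\neg E \mid H) \ll \Pr(\neg E \mid \neg H)$, the likelihood ratio is far below 1, and Dave’s failure to produce an example sharply lowers the odds on $H$.)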
Doubting that there are any examples of P is not, so to speak, Carol’s job. The claim is that E is an example of P. The only reason Carol has for thinking that there are examples of P (excepting cases where P is something well-known, of which there are obviously many examples) is that Dave has described E to the reader. Once E is disqualified, Carol is back to having no particular reason to believe that there are any examples of P.
Once E is disqualified, it is (or it ought to be!) implied that supplying other examples of P is now incumbent upon Dave. Carol bears no obligation (either epistemic or rhetorical) to commit to any position on the question of “are there any examples of P”, in order for Dave to be faced with the need to provide replacement examples.
In short, I think that “having made a claim, has Dave in fact provided any actual examples of the claimed thing” is (barring edge cases) always a core point.
It seems to me that there are likely to be enough cases where there are differences of opinion about whether P is well-known enough that examples aren’t needed, or about whether P isn’t well-known but a reader, upon hearing a definition, could think of examples themselves, that it’s useful to have norms whereby we clarify whether or not we doubt that there are examples of P.
All of the cases I am thinking of are those where P is a new concept, which the author is defining / describing / “crystallizing” for the first time. As such, it seems unlikely that this sort of edge case would apply.
I do agree that working examples are quite important (and that this is something authors should be encouraged to provide).
The issue I expect to be relevant to your past experience is different takes on which examples are valid.
(My impression is that it is often the case, esp. with discussions relating to internal mental states, that the author provides something that makes total sense to them, and me, but doesn’t make sense to you, and then you continuously ask for better/different examples when it seems like the underlying issue is that for whatever reason, the particular mental phenomenon they’re referencing isn’t relevant to you.
How to resolve this seems like a different question from the one the rest of this thread is focusing on. But I think by this point a lot of people not-providing-you-in-particular-with-examples is because they don’t expect your criticism of their examples to be that useful.)
Oh, certainly this is a fair point. No argument there! But we can agree, I think, that “you say E is not an example of P, but I maintain that it is” is not at all the same thing as “you’re not addressing the core point”—yes?
(Talking about the rules here is fine.)
To clarify, I’m not saying that Steelmanning is required. Only that a criticism that steelmans is, typically, more valuable than criticism that doesn’t.
The “Death by a Thousand Cuts” thing isn’t meant to be a hard-and-fast rule; it’s a judgment made in some cases. Most comments (critical or otherwise) have a number of things going on that involve multiple benefits and multiple costs. Some costs (and benefits) can aggregate over time, and might aggregate too much if done a lot in a short period of time. (Imagine a weapon in a video game that overheats, or pollution that isn’t too noticeable unless it crosses a particular density.)
It’s hard to have an explicit rule here that isn’t gameable, but broadly, the more costs a set of comments is imposing, the more valuable it needs to be. (This is true for claims as well as criticism.)
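(For concreteness, here’s a minimal sketch of the accumulate-and-decay dynamic the overheating analogy points at. The function name, costs, decay rate, and threshold are all hypothetical illustrations, not an actual moderation algorithm:)

```python
# Illustrative sketch only: a "leaky bucket" model of comment costs.
# Each comment adds some cost; the accumulated total decays over time.
# No single comment trips the threshold, but a burst of them does.
# All names and numbers here are made up for illustration.

def accumulated_heat(comments, decay_per_day=0.5):
    """comments: list of (cost, day) pairs, sorted by day."""
    heat = 0.0
    prev_day = None
    for cost, day in comments:
        if prev_day is not None:
            # heat cools off geometrically between comments
            heat *= (1 - decay_per_day) ** (day - prev_day)
        heat += cost
        prev_day = day
    return heat

THRESHOLD = 10.0

# Ten mildly costly comments spread over ten days stay under threshold...
spread_out = [(2.0, day) for day in range(10)]
print(accumulated_heat(spread_out) > THRESHOLD)  # False (heat ~4.0)

# ...but the same ten comments posted in a single day cross it.
burst = [(2.0, 0) for _ in range(10)]
print(accumulated_heat(burst) > THRESHOLD)  # True (heat = 20.0)
```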
(I am saying that, on the margin, it is better for LW and for the general “find useful true things” project to have more people training the skill of steelmanning-as-matter-of-course as they debate. The Steelmanning article advocates it for improving rationalist virtue. LW is about cultivating rationalist virtue. I think if you practice doing it regularly it stops being a weird extra thing you have to do and becomes part of the normal flow of thinking, and that it’s net positive both for you and the people reading your criticism.)
I disagree. Steelmanning is nice, but I don’t think it necessarily adds value. I think there is real value in engaging the actual arguments that the person made, in the way that they made them. If LessWrong is going to train rationalists to argue for their points persuasively, I think it’s imperative that we engage with the actual evidence that is presented, and not the idealized version of the evidence that would have convinced us of the conclusions.
Edit: After thinking about it some more, I have realized that steelmanning poses a danger to the listener as well as to the speaker. Namely, given two arguments of equal strength, one which I am able to steelman, and one which I am not, it’s quite possible I will find the argument that I am able to steelman more convincing, even though it has no more evidence behind it than the argument that I am not able to steelman. It seems to me that steelmanning exaggerates our cognitive blindspots, rather than reducing them. Can you show me that steelmanning is not an epistemic hazard?
Yup, see Ozy’s post Against Steelmanning and Eliezer’s fb post agreeing that starts “Be it clear: Steelmanning is not a tool of understanding and communication.”
Hmm. I’m not sure how much of this is difference-in-predicted-best-strategy-for-truthseeking, difference in values, or unclarity around the term “Steelman”.
I do think there are bad ways to Steelman, and I think there are times where Steelmanning isn’t the appropriate thing to do. But the way you’re using it, it sounds like you mean “rationalize reasons something might be true”, as opposed to “think about the strongest version of an argument.”
Doesn’t really seem like what LessWrong should be trying to do, to me. The point here is to figure out useful true things (and to have a culture of people who are good at figuring out true things, both individually and collectively).
It doesn’t matter (as much) whether someone presents a good or bad argument for a thing. What ultimately matters is “is the thing true? Is it important? If it’s not true, is the problem the argument was trying to address important and are there other ways to address it?”
If a claim has some bad logic in it, but then you fix the logic and the claim makes sense, you should believe it, because, well, the improved claim makes sense. (You should continue to not believe the original claim with the broken logic, because it had broken logic)
It sounds like you’re worried about times when you might think you’re doing that but are in fact just deluding yourself. (Which I agree is a bad thing that happens sometimes, but I don’t think Steelmanning makes you any more prone to that than arguing in the first place. I think it’s much more frequent for people to make intellectual mistakes by staying in “attack” mode than by being overly accommodating of people they disagree with)
Yes, I agree with that. However, I think it’s very easy to change the conclusion in the process of changing the inferential steps or the premises. If arguments were presented mathematically, using formal logic, I would have no objection to steelmanning. It would be obvious if the conclusion of an argument had changed in the process of fixing logic errors. However, we discuss in English, not math, and as a result I’m wary of engaging with anything other than the text as it is written. I do not have confidence in my ability to change my interlocutor’s argument while preserving its conclusion.
FWIW, while this isn’t steelmanning, this recent comment of yours seems to be doing the general motion I’m trying to point to here, of which steelmanning is a subset: you point out a flaw in someone’s argument, while acknowledging the underlying problem they’re trying to solve, and then contribute additional possible solutions. Constructive criticism rather than destructive.
(This is not me necessarily endorsing your solution in that comment, since it’s a complicated domain and I haven’t thought about it thoroughly myself, but the comment is structured in a way that helps other people who join the discussion continue to operate in a “help figure out something useful rather than attack each other” mode.)
It doesn’t seem like steelmanning is particularly useful for communication or critique. It refines ideas into something different, more interesting, drawing attention away from the original. This makes it useful for collaborative truth seeking, or just for your own thinking based on what you read.
A useful notion of steelmanning needs to be distinguished from charity and rationalization. Charity is looking into the reasons a person believes or says the things they do. The beliefs themselves may be ridiculous and not useful to understand, but the reasons for arriving at them point to a real process in someone’s mind and may clarify the context where the beliefs come up. Ignoring the beliefs as something you won’t accept is different from ignoring the process that created them, and charity is about paying attention to the process. The reasons for holding a belief can be different from arguments given for it, and there is also a question about the reasons for arriving at certain arguments. Pursuing charity leads to identifying errors in thinking. It’s also the right point of view on weaponized words that turn out not to reflect beliefs in the usual sense, but serve a purpose, even without the knowledge of the people utilizing the words.
Steelmanning, on the other hand, acts on the beliefs themselves. It brings to attention improved versions of the beliefs, versions that may be more worthy of discussion than the original, non-steelmanned beliefs. So it’s a way of changing the topic to something occasionally more interesting, and in that it’s similar to charity, but it changes the topic in a completely different way.
Rationalization is finding a convincing argument for a predefined position. When the position is incorrect, even slightly, the arguments to choose from are flawed, and the task is to find the most convincing of them. The flaws are mostly about ignoring some evidence and giving too much weight to other evidence, although if the audience is not too discerning, other flaws may allow the argument to become even more convincing.
Steelmanning a belief discards the problem statement of rationalization, because it changes the belief itself. Steelmanning an argument for a predefined belief is almost exactly rationalization. But steelmanning an argument without requiring its conclusion to come out the same may be interesting, even as it changes the topic of discussion.