To clarify, I’m not saying that Steelmanning is required. Only that a criticism that steelmans is, typically, more valuable than criticism that doesn’t.
The “Death by a Thousand Cuts” thing isn’t meant to be a hard-and-fast rule; it’s a judgment made in some cases. Most comments (critical or otherwise) have a number of things going on that involve multiple benefits and multiple costs. Some costs (and benefits) can aggregate over time, and might aggregate too much if incurred a lot in a short period. (Imagine a weapon in a video game that overheats, or pollution that isn’t too noticeable until it crosses a particular density.)
It’s hard to have an explicit rule here that isn’t gameable, but broadly, the more costs a set of comments is imposing, the more valuable it needs to be. (This is true for claims as well as for criticism.)
(I am saying that, on the margin, it is better for LW and for the general “find useful true things” project to have more people training the skill of steelmanning-as-matter-of-course as they debate. The Steelmanning article advocates it for improving rationalist virtue. LW is about cultivating rationalist virtue. I think if you practice doing it regularly it stops being a weird extra thing you have to do and becomes part of the normal flow of thinking, and that it’s net positive both for you and for the people reading your criticism.)
Only that a criticism that steelmans is, typically, more valuable than criticism that doesn’t.
I disagree. Steelmanning is nice, but I don’t think it necessarily adds value. I think there is real value in engaging the actual arguments that the person made, in the way that they made them. If LessWrong is going to train rationalists to argue for their points persuasively, I think it’s imperative that we engage with the actual evidence that is presented, and not the idealized version of the evidence that would have convinced us of the conclusions.
Edit: After thinking about it some more, I have realized that steelmanning poses a danger to the listener as well as to the speaker. Namely, given two arguments of equal strength, one which I am able to steelman, and one which I am not, it’s quite possible I will find the argument that I am able to steelman more convincing, even though it has no more evidence behind it than the argument that I am not able to steelman. It seems to me that steelmanning exaggerates our cognitive blindspots, rather than reducing them. Can you show me that steelmanning is not an epistemic hazard?
Yup, see Ozy’s post Against Steelmanning and Eliezer’s fb post agreeing that starts “Be it clear: Steelmanning is not a tool of understanding and communication.”
Hmm. I’m not sure how much of this is a difference in predicted best strategy for truthseeking, a difference in values, or unclarity around the term “Steelman.”
I do think there are bad ways to Steelman, and I think there are times when Steelmanning isn’t the appropriate thing to do. But the way you’re using it, it sounds like you mean “rationalize reasons something might be true,” as opposed to “think about the strongest version of an argument.”
If LessWrong is going to train rationalists to argue for their points persuasively...
Doesn’t really seem like what LessWrong should be trying to do, to me. The point here is to figure out useful true things (and to have a culture of people who are good at figuring out true things, both individually and collectively).
It doesn’t matter (as much) whether someone presents a good or bad argument for a thing. What ultimately matters is “is the thing true? Is it important? If it’s not true, is the problem the argument was trying to address important and are there other ways to address it?”
If a claim has some bad logic in it, but then you fix the logic and the claim makes sense, you should believe it, because, well, the improved claim makes sense. (You should continue to not believe the original claim with the broken logic, because it had broken logic)
It sounds like you’re worried about times when you might think you’re doing that but are in fact just deluding yourself. (Which I agree is a bad thing that happens sometimes, but I don’t think Steelmanning makes you any more prone to that than arguing in the first place. I think it’s much more frequent for people to make intellectual mistakes by staying in “attack” mode than by being overly accommodating of people they disagree with)
If a claim has some bad logic in it, but then you fix the logic and the claim makes sense, you should believe it
Yes, I agree with that. However, I think it’s very easy to change the conclusion in the process of changing the inferential steps or the premises. If arguments were presented mathematically, using formal logic, I would have no objection to steelmanning. It would be obvious if the conclusion of an argument had changed in the process of fixing logic errors. However, we discuss in English, not math, and as a result I’m wary of engaging with anything other than the text as it is written. I do not have confidence in my ability to change my interlocutor’s argument while preserving its conclusion.
FWIW, while this isn’t steelmanning, this recent comment of yours seems to be doing the general motion I’m trying to point to here, of which steelmanning is a subset: you point out a flaw in someone’s argument, while acknowledging the underlying problem they’re trying to solve, and then contribute additional possible solutions. Constructive criticism rather than destructive.
(This is not me necessarily endorsing your solution in that comment, since it’s a complicated domain and I haven’t thought about it thoroughly myself, but the comment is structured in a way that helps other people who join the discussion continue to operate in a “help figure out something useful” mode rather than an “attack each other” one.)
It doesn’t seem like steelmanning is particularly useful for communication or critique. It refines ideas into something different, more interesting, drawing attention away from the original. This makes it useful for collaborative truth seeking, or just for your own thinking based on what you read.

A useful notion of steelmanning needs to be distinguished from charity and rationalization. Charity is looking into the reasons a person believes or says the things they do. The beliefs themselves may be ridiculous and not useful to understand, but the reasons for arriving at them point to a real process in someone’s mind and may clarify the context where the beliefs come up. Ignoring the beliefs as something you won’t accept is different from ignoring the process that created them, and charity is about paying attention to the process. The reasons for holding a belief can be different from the arguments given for it, and there is also a question about the reasons for arriving at certain arguments. Pursuing charity leads to identifying errors in thinking. It’s also the right point of view on weaponized words that turn out not to reflect beliefs in the usual sense but to serve a purpose, even without the awareness of the people utilizing them.

Steelmanning, on the other hand, acts on the beliefs themselves. It brings to attention improved versions of the beliefs, versions that may be more worthy of discussion than the original, non-steelmanned beliefs. So it’s a way of changing the topic to something occasionally more interesting, and in that it’s similar to charity, but it changes the topic in a completely different way.

Rationalization is finding a convincing argument for a predefined position. When the position is incorrect, even slightly, the arguments to choose from are all flawed, and the task is to find the most convincing of them. The flaws mostly involve ignoring some evidence and giving too much weight to other evidence, although if the audience is not too discerning, other flaws may let the argument become even more convincing.

Steelmanning a belief discards the problem statement of rationalizing it, since it changes the belief itself. Steelmanning an argument for a predefined belief is almost exactly rationalization. But steelmanning an argument without requiring its conclusion to come out the same may be interesting, even as it changes the topic of discussion.