> This isn’t a big deal if we treat steelmanning as niche, as a cool sideshow. But if we treat it as a fundamental conversational virtue, I think (to some nontrivial degree) it actively interferes with understanding and engaging with views you don’t agree with, especially ones based on background views that are very novel and foreign to you.
So, I’ve been ruminating on the steelmanning question for a couple of months, and your position still doesn’t sit easy with me.
Simply put, I think the steelmanning reflex is super important, and your model seems to downplay its importance a lot.
> I’ll note that ‘steel-manning’ isn’t exclusively used for ‘someone else believes P; I should come up with better arguments for P, if their own arguments are insufficient’. It’s also used for:
> Someone believes P; but P is obviously false, so I should come up with a new claim Q that’s more plausible and is similar to P in some way.
I basically think this is fine and good. My real question isn’t exactly “are the claims made in paper X true”—I’m generally equally interested in claims adjacent to the specific claims made in paper X. This is also usually true of the authors of the paper, although we will be missing a lot of specific information about which adjacent claims they were interested in.
I think all communication has an element of translation. I’m basically never interested in just learning to imitate the series of sounds the other person is making—I want to translate their thinking into my own ontology, and there’s an element of in-the-best-way-possible that’s critically necessary for this.
This doesn’t mean we should substitute our own point for the other person’s and call it understanding. One reason why not: in order to do the translation task well, we should keep track of our current best translation, but we also need to keep track of the current biggest discordance in that translation.
For example, maybe someone (perhaps with a different dialect of English) is talking about “cake”, but keeps saying things that don’t quite make sense for cake, like mentioning putting it in their purse. We should keep track of various hypotheses about what “cake” really is, based on what would make the most sense. Taking their words at face value instead would be obstinate; it would amount to refusing to work with the person to facilitate communication.
This is the sort of thing I worry about with the proposal to replace steelmanning with ITT-passing and civility.
> All of this means that it’s hard to enforce a strict distinction (in real-world practice) between the norm ‘if you’re debating P with someone, generate and address the best counter-arguments against your view of P, not just the arguments your opponent mentioned’ and the norm ‘if someone makes a claim you find implausible, change the topic to discussing a different claim that you find more plausible’.
It seems to me like this is usually good rather than bad. Even amongst super technical folk, people don’t always state their claims precisely enough to support the kind of weight you’re giving them. If I provide a correction to someone’s statement-of-claim, based on what I find plausible, the correction is often accepted. In other cases, the person will let me know that no, they really did mean X, and yes, they really do think they can argue for X.
Stating what I find plausible can therefore be a quick way to check for common ground versus disagreement. Trying to prompt the other person to make that correction themselves is harder and more frustrating. Charitably offering a rephrasing can be much more friendly and cooperative than trying to prod them to see why their own statement didn’t make sense. In my experience, people are more liable to clam up if you Socratically try to get them to correct their own mistakes rather than offering the obvious corrections you see.
There’s another failure mode where a disagreement can go on almost arbitrarily long if I don’t offer my steelman: the other person would actually agree with the steelman, and my real disagreement is with the way they chose to word their argument. Without steelmanning, this kind of conversation has a big potential to get bogged down in terminological disputes, because I am, at base, disagreeing with how they’re putting it, not with their true anticipations. I’m currently in one ongoing discussion where I can pass their ITT pretty well, but I can’t steelman them; my lack of a steelman seems like the bottleneck to understanding them.
For example, I once had an hour+ argument about free will with a more religious person. I argued in favor of compatibilism; they argued against it. It felt like no progress was being made. At the very end of the discussion, they revealed that they thought I had to be correct in some sense, because although humans have free will, God also has perfect knowledge of the future. They also said they were really only arguing against me because they thought some details of how I was putting my view forward were extremely misleading.
This seems to me like an excellent religious translation of the point I was trying to make. So the previous hour+ was somewhat wasted: they already agreed with the actual point I was making; they just thought that the way I was stating it (presumably, very shaped by a scientific worldview) was dangerously misleading and had all the wrong connotations.
If they had steelmanned me faster, we would have had common ground to work from in exploring our other disagreements, which probably would have been a lot more fruitful, even though I would not have agreed with all the details of their steelman.