But why didn’t A just frame her argument in objective, consequentialist terms? Do we assume that what she wrote was sort of a telephone-game approximation of what was originally a highly logical consequentialist argument? If so, where can I find that argument? And if not, why are we assuming that A is a crypto-consequentialist when she probably isn’t?
I never thought that steelmanning necessarily implied assuming that A would agree with the steelmanned version. If A says something that seems to have a reasonable point behind it but is expressed badly, then yes, in that case the steelmanned version can be something that they’d agree with. But they might also say something that is obviously wrong and not worth engaging with, yet which nonetheless sparks an idea about something more reasonable, and which might be interesting to discuss.
In either case, we’ve replaced a bad argument with a better one that seems worth considering and discussing. Whether or not A really intended the argument to be understood like that doesn’t matter that much.
To take a more concrete example, in What Data Generated That Thought?, I wrote:
All outcomes are correlated with causes; most statements are evidence of something. Michael Vassar once gave the example of a tribe of people who thought that faeries existed, lived in a nearby forest, and you could see them once you became old enough. It later turned out that the tribe had a hereditary eye disease which caused them to see things from the corners of their eyes once they got old. The tribe’s theory of what was going on was wrong, but it was still based on some true data about the real world. A scientifically minded person could have figured out what was going on, by being sufficiently curious about the data that generated that belief.
If the person giving the original argument is the tribe, the original argument is “faeries exist”, and the steelmanned argument is “these people carry the genes for a hereditary eye disease”, then our steelmanned version certainly isn’t what the tribe originally intended. But what does it matter? Steelmanning their argument still gave us potentially useful information.