LessWrong [...] doesn’t especially prioritize the cluster of areas around ‘strawmanning being considered especially bad’ and ‘making unfounded statements about a person’s inner state’
You mean it’s considered a reasonable thing to aspire to, and just hasn’t reached the top of the list of priorities? This would be hair-raisingly alarming if true.
I’m not sure I parse this. I’d say yes, it’s a reasonable thing to aspire to, and it hasn’t reached the top of the moderators’/admins’ priorities. You say “that would be alarming”, and infer… something?
I think you might be missing some background context on how much I think Duncan cares about this, and what I mean by not prioritizing it to the degree he does?
(I’m about to make some guesses about Duncan. I expect to re-enable his commenting within a day or so, and he can correct me if I’m wrong.)
I think Duncan thinks “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” strawmans his position, and still gets mostly upvoted, and that if he weren’t going out of his way to make this obvious, people wouldn’t notice. And when he does argue that this is happening, his comment doesn’t get upvoted much at all.
You might just say “well, Duncan is wrong about whether this is strawmanning”. I think it is [edit for clarity: somehow] strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted.
I think if I were to try to say “knock it off, here’s a warning” the way I think Duncan wants me to, this would a) just be more time-consuming than mods have the bandwidth for (we don’t do that sort of move in general, not just for this class of post), b) disincentivize literal-Zack and new marginal Zack-like people from posting, and c) I think the amount of strawmanning here is just not bad enough to be worth that. (see this comment)
It’s a bad thing to institute policies when good proxies are missing. It doesn’t matter if the intended objective is good: a policy that isn’t feasible to sanely execute makes things worse.
Whether statements about someone’s inner state are “unfounded”, or whether something is a “strawman”, is hopelessly muddled in practice; only open-ended discussion has a hope of resolving that, not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.
But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in adjudicating factual questions? Should we litigate p(doom) next? This is just obviously out of scope; I don’t see a principled difference. People should be allowed to be wrong: that’s the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).
(So I think it’s not just good proxies needed to execute a policy that are missing in this case, but the objective is also bad. It’s bad on both levels, hence “hair-raisingly alarming”.)
I’m actually still kind of confused about what you’re saying here (and in particular whether you think the current moderator policy of “don’t get involved most of the time” is correct)
You implied and then confirmed that you consider a policy for a certain objective an aspiration. I argued that the policies I can imagine that target that objective would be impossible to execute, making things worse through collateral damage. And, separately, that the objective itself (moderating factual claims) seems bad.
(In the above two comments, I’m not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn’t seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I’m not averse to re-injecting the context into their discussion. But I won’t necessarily find that interesting or have things to say about it.)
So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators’ arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity), or to implicitly take into account some piece of alleged knowledge. This seems related to why I find the objective of the hypothetical policy against strawmanning a bad thing.
Okay, gotcha, I had not understood that. (Vaniver’s comment elsethread had also cleared this up for me; I just hadn’t gotten around to replying to it yet.)
One thing “not close to the top of our list of priorities” means is that I haven’t actually thought that much about the issue in general. On the question of “do LessWrong moderators think they should respond to strawmanning?” (or various other fallacies), my guess (having thought about it for like 5 minutes recently) is something like:
I don’t think it makes sense for moderators to have a “policy against strawmanning”, in the sense that we take some kind of moderator action against it. But, a thing I think we might want to do is “when we notice someone strawmanning, make a comment saying ‘hey, this seems like strawmanning to me?’” (which we aren’t treating as a special mod comment with special authority; it’s more like just proactively being a good conversation participant). And, if we had a lot more resources, we might try to do something like “proactively noticing and responding to various fallacious arguments at scale.”
(FYI @Vladimir_Nesov, I’m curious whether this sort of thing still feels “hair-raisingly alarming” to you.)
(Note that I see this issue as fairly different from the issue with Said, where the problem is not any one given comment or behavior, but an aggregate pattern)
I think it is strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted. [...] I think the amount of strawmanning here is just not bad enough
Why do you think it’s strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!
As I’ve explained, I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment, I gave two examples illustrating what I thought the relevant evidentiary standard looks like.
If I had a more Said-like commenting style, I would stop there, but as a faithful adherent of the church of arbitrarily large amounts of interpretive labor, I’m willing to do your work for you. When I imagine being a lawyer hired to argue that “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” engages in strawmanning, and trying to point to which specific parts of the post constitute a misrepresentation, the two best candidates I come up with are (a) the part where the author claims that “if someone did [speak of ‘physicist motors’], you might quietly begin to doubt how much they really knew about physics”, and (b) the part where the author characterizes Bensinger’s “defeasible default” of “role-playing being on the same side as the people who disagree with you” as being what members of other intellectual communities would call “concern trolling.”
However, I argue that both examples (a) and (b) fail to meet the relevant standard, of the text that the critic published materially misrepresenting the text that the original author published.
In the case of (a), while the most obvious reading of the text might be characterized as rude or insulting insofar as it suggests that readers should quietly begin to doubt Bensinger’s knowledge of rationality, insulting an author is not the same thing as materially misrepresenting the text that the author published. In the case of (b), “concern-trolling” is a pejorative term; it’s certainly true that Bensinger would not self-identify as engaging in concern-trolling. But that’s not what the text is arguing: the claim is that the substantive behavior that Bensinger recommends is something that other groups would identify as “concern trolling.” I continue to maintain that this is true.
Regarding another user’s claim that the “entire post” in question “is an overt strawman”, that accusation was rebutted in the comments by both myself and Said Achmiz.
In conclusion, I stand by my post.
If you disagree with my analysis here, that’s fine: I want people to be able to criticize my work. But I think you should be able to say why, specifically. I think it’s great when people make negative-valence claims about my work, and then back up those claims with specific arguments that I can learn from. But I think it’s bad when people make negative-valence claims about my work that they don’t argue for, and then I have to do their work for them as part of my service to the church of arbitrarily large amounts of interpretive labor (as I’ve done in this comment).
I meant the primary point of my previous comment to be “Duncan’s accusation in that thread is below the threshold of ‘deserves moderator response’” (i.e., Duncan wishes the LessWrong moderators would intervene on things like that on his behalf [edit: reliably and promptly], and I don’t plan to do that, because I don’t think it’s that big a deal). (I edited the previous comment to say “kinda” strawmanning, to clarify the emphasis more.)
My point here was just explaining to Vladimir why I don’t find it alarming that the LW team doesn’t prioritize strawmanning the way Duncan wants. (I’m still somewhat confused about what Vlad meant by his question, though, and am honestly not sure what this conversation thread is about.)
I’m still somewhat confused about what Vlad meant by his question, though, and am honestly not sure what this conversation thread is about
I see Vlad as saying “that it’s even on your priority list, given that it seems impossible to actually enforce, is worrying” not “it is worrying that it is low instead of high on your priority list.”
I don’t plan to do that, because I don’t think it’s that big a deal
I think it plausibly is a big deal and mechanisms that identify and point out when people are doing this (and really, I think a lot of the time it might just be misunderstanding) would be very valuable.
I don’t think moderators showing up and making a judgment and proclamation is the right answer. I’m more interested in making it so that people reading the thread can provide the feedback, e.g. via Reacts.
Just noting that “What specifically did it get wrong?” is a perfectly reasonable question to ask, and is one I would have (in most cases) been willing to answer, patiently and at length.
That I was unwilling in that specific case is an artifact of the history of Zack being quick to aggressively misunderstand that specific essay, in ways that I considered excessively rude (and which Zack has since publicly retracted).
Given that public retraction, I’m considering going back and in fact answering the “what specifically” question, as I normally would have at the time. If I end up not doing so, it will be more because of opportunity costs than anything else. (I do have an answer; it’s just a question of whether it’s worth taking the time to write it out months later.)
I’m very confused: how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post?
The author can say that a reader’s post is an inaccurate representation of the author’s ideas, but how can the author possibly read the reader’s mind and conclude that the reader is doing it on purpose? Isn’t that a claim that requires exceptional evidence?
Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that judgments are more likely to be influenced by factors other than truth. It won’t matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, or more attractive get more leeway).
I personally would much rather people be judged by their concrete actions or the impact of those actions (e.g. saying someone consistently rephrases arguments in ways that do not match the author’s intent or the majority of readers’ understanding) than by their intent (e.g. saying someone is strawmanning).
To be against strawmanning (on weak evidence) while also being against ‘making unfounded statements about a person’s inner state’ seems to me like a self-contradictory stance, since an accusation of strawmanning is itself a statement about someone’s inner state.