I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.
I note re:
It’d be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
… that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would’ve been less likely to leave and would be more likely to return with marginal movement in that direction.
I don’t know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like “how would you have felt if we had moved 25% in this direction,” I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more “what? No, we’re well-adapted to the current environment; we’re the ones who’ve been filtered for.”
(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)
Nod. I want to clarify, the diff I’m asking about and being skeptical about is “assuming, holding constant, that LessWrong generally tightens moderation standards along many dimensions, but doesn’t especially prioritize the cluster of areas around ‘strawmanning being considered especially bad’ and ‘making unfounded statements about a person’s inner state’”
i.e. the LessWrong team is gearing up to invest a lot more in moderation one way or another. I expect you to be glad that happened, but still frequently feel in pain on the site and feel a need to take some kind of action regarding it. So, the poll I’d want is something like “given overall more mod investment, are people still especially concerned about the issues I associate with Duncan-in-particular”.
I agree some manner of poll in this space would be good, if we could implement it.
FWIW, I don’t avoid posting because of worries of criticism or nitpicking at all. I can’t recall a moment that’s ever happened.
But I do avoid posting once in a while, and avoid commenting, because I don’t always have enough confidence that, if things start to move in an unproductive way, there will be any *resolution* to that.
If I’d been on LessWrong a lot 10 years ago, this wouldn’t stop me much. I used to be very… well, not happy exactly, but willing, to spend hours fighting the good fight and highlighting all the ways people are being bullies or engaging in bad argument norms or polluting the epistemic commons or using performative Dark Arts and so on.
But moderators of various sites (not LW) have often been unable to adjudicate such situations to my satisfaction, and over time I just felt like it wasn’t worth the effort in most cases.
From what I’ve observed, the LW mod team is far better at this than most sites’. But when I imagine a nearer-to-perfect world, it does include a lot more “heavy-handed” moderation, in the form of someone outside an argument being willing and able to judge and highlight whether someone is failing in some essential way to be a productive conversation partner.
I’m not sure what the best way to do this would be, mechanically, given realistic time and energy constraints. Maybe a special “Flag a moderator” button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate? Maybe even that would be too onerous, but *shrugs*. There’s probably a scale at which it is valuable for most people while still being insufficient for someone like Duncan. Maybe the allotment decreases each time you’re ruled against.
Overall I don’t want to overpromise something like “if LW has a stronger concentration-of-force expectation for good conversation norms, I’d participate 100x more instead of just reading.” But 10x more to begin with, certainly, and maybe more than that over time.
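The quota mechanic floated above (a monthly flag allowance that scales with karma and shrinks each time you’re ruled against) could be sketched roughly like this. Every name and number here is invented for illustration, not a description of anything LessWrong actually implements:

```python
# Hypothetical sketch of a "Flag a moderator" quota: a monthly allowance
# that grows with karma and shrinks after adverse rulings. All class names
# and thresholds are made up for illustration.

class FlagQuota:
    def __init__(self, karma: int, base_flags: int = 2, karma_per_extra_flag: int = 500):
        # Monthly allowance: a small base, plus one extra flag per karma threshold.
        self.allowance = base_flags + karma // karma_per_extra_flag
        self.penalties = 0  # number of times this user's flags were ruled against

    def flags_this_month(self) -> int:
        # Each adverse ruling permanently costs one flag per month (floor of 1,
        # so nobody is locked out entirely).
        return max(1, self.allowance - self.penalties)

    def record_ruling(self, upheld: bool) -> None:
        # Called after a mod adjudicates a flagged thread.
        if not upheld:
            self.penalties += 1
```

Under these made-up numbers, a user with 1200 karma would start with four flags a month, dropping to three after one adverse ruling.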
Maybe a special “Flag a moderator” button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate?
This is similar to the idea for the Sunshine Regiment from the early days of LW 2.0, where the hope was that if we had a wide team of people who were sometimes called on to do mod-ish actions (like explaining what’s bad about a comment, or how it could have been worded, or linking to the relevant part of The Sequences, or so on), we could get much more of it. (It would simultaneously be a counterspell to the bystander effect (when someone specific gets assigned a comment to respond to), a license to respond at all (because otherwise, who are you to complain about this comment?), a counterfactual-matching incentive to do it (if you do the work you’re assigned, you also fractionally encourage everyone else in your role to do the work they’re assigned), and a scheme to lighten the load (as there might be more mods than things to moderate).)
It ended up running into the problem that, actually there weren’t all that many people suited to and interested in doing moderator work, and so there was a small team of people who would do it (which wasn’t large enough to reliably feel on top of things, and instead needed to prioritize to avoid scarcity).
I also don’t think there’s enough uniformity of opinion among moderators or high-karma-users or w/e that having a single judge evaluate whole situations will actually resolve them. (My guess is that if I got assigned to this case, Duncan would have wanted to appeal, and if RobertM got assigned to this case, Said would have wanted to appeal, as you can see from the comments they wrote in response. This is even though I think RobertM and I agree on the object-level points and only disagree on interpretations and overall judgments of relevance!) I feel more optimistic about something like “a poll” of a jury drawn from some limited pool, where some situations go 10-0, others 7-3, some 5-5; this of course 10xs the costs compared to a single judge. (And open-access polls have both the benefit and drawback of volunteer labor.)
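The jury variant described above (a handful of reviewers drawn from a limited pool, reporting a vote split rather than a single verdict) could be sketched like so; the pool, jury size, and votes are all invented:

```python
# Hypothetical sketch of a jury poll drawn from a limited pool of trusted
# users, reporting the vote split (10-0, 7-3, 5-5, ...) instead of one
# judge's ruling. All parameters here are made up for illustration.
import random

def draw_jury(pool, size, seed=None):
    # Sample a jury without replacement; a seed makes the draw reproducible.
    rng = random.Random(seed)
    return rng.sample(pool, size)

def tally(votes):
    # votes: booleans answering "should the complaint be upheld?"
    in_favor = sum(votes)
    return in_favor, len(votes) - in_favor
```

A contested case would then surface as something like a `(7, 3)` tally, making visible how divided the pool actually was, instead of hiding the disagreement behind one judge’s ruling.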
All good points, and yeah, I did consider the issue of “appeals” but considered “accept the judgement you get” part of the implicit (or even explicit, if necessary) agreement made when raising that flag in the first place. Maybe it would require both people to mutually accept it.
But I’m glad the “pool of people” variation was tried, even if it wasn’t sustainable as volunteer work.
It ended up running into the problem that, actually there weren’t all that many people suited to and interested in doing moderator work
I’m not sure that’s true? I was asked at the time to be Sunshine mod, I said yes, and then no one ever followed up to assign me any work. At some point later I was given an explanation, but I don’t remember it.
LessWrong [...] doesn’t especially prioritize the cluster of areas around ‘strawmanning being considered especially bad’ and ‘making unfounded statements about a person’s inner state’
You mean it’s considered a reasonable thing to aspire to, and just hasn’t reached the top of the list of priorities? This would be hair-raisingly alarming if true.
I’m not sure I parse this. I’d say yes, it’s a reasonable thing to aspire to and hasn’t reached the top of (the moderator/admins) priorities. You say “that would be alarming”, and infer… something?
I think you might be missing some background context on how much I think Duncan cares about this, and what I mean by not prioritizing it to the degree he does?
(I’m about to make some guesses about Duncan. I expect to re-enable his commenting within a day or so and he can correct me if I’m wrong)
I think Duncan thinks “Rationalist Discourse” Is Like “Physicist Motors” strawmans his position, and still gets mostly upvoted, and that if he weren’t going out of his way to make this obvious, people wouldn’t notice. And when he does argue that this is happening, his comment doesn’t get upvoted much at all.

You might just say “well, Duncan is wrong about whether this is strawmanning”. I think it is [edit for clarity: somehow] strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted.
I think if I were to try to say “knock it off, here’s a warning” the way I think Duncan wants me to, this would a) just be more time-consuming than mods have the bandwidth for (we don’t do that sort of move in general, not just for this class of post), b) disincentivize literal-Zack and new marginal Zack-like people from posting, and c) I think the amount of strawmanning here is just not bad enough to be worth that. (see this comment)
It’s a bad thing to institute policies when missing good proxies. Doesn’t matter if the intended objective is good, a policy that isn’t feasible to sanely execute makes things worse.
Whether statements about someone’s inner state are “unfounded” or whether something is a “strawman” is hopelessly muddled in practice, only open-ended discussion has a hope of resolving that. Not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.
But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in consideration of factual questions? Should we litigate p(doom) next? This is just obviously out of scope, I don’t see a principled difference. People should be allowed to be wrong, that’s the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).
(So I think it’s not just good proxies needed to execute a policy that are missing in this case, but the objective is also bad. It’s bad on both levels, hence “hair-raisingly alarming”.)
I’m actually still kind of confused about what you’re saying here (and in particular whether you think the current moderator policy of “don’t get involved most of the time” is correct)
You implied and then confirmed that you consider a policy for a certain objective an aspiration; I argued that the policies I can imagine that target that objective would be impossible to execute, making things worse in collateral damage. And that, separately, the objective seems bad (moderating factual claims).
(In the above two comments, I’m not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn’t seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I’m not averse to re-injecting the context into their discussion. But I won’t necessarily find that interesting or have things to say on.)
So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators’ arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity) or to implicitly take into account some piece of alleged knowledge. This seems related to how I find the objective of the hypothetical policy against strawmanning a bad thing.
Okay, gotcha, I had not understood that. (Vaniver’s comment elsethread had also cleared this up for me I just hadn’t gotten around to replying to it yet)
One thing “not close to the top of our list of priorities” means is that I haven’t actually thought that much about the issue in general. On the question of “do LessWrong moderators think they should respond to strawmanning?” (or various other fallacies), my guess (having thought about it for like 5 minutes recently) is something like:
I don’t think it makes sense for moderators to have a “policy against strawmanning”, in the sense that we take some kind of moderator action against it. But a thing I think we might want to do is “when we notice someone strawmanning, make a comment saying ‘hey, this seems like strawmanning to me?’” (which we aren’t treating as a special mod comment with special authority, more like just proactively being a good conversation participant). And, if we had a lot more resources, we might try to do something like “proactively noticing and responding to various fallacious arguments at scale.”

(FYI @Vladimir_Nesov I’m curious if this sort of thing still feels ‘hair-raisingly alarming’ to you)
(Note that I see this issue as fairly different from the issue with Said, where the problem is not any one given comment or behavior, but an aggregate pattern)
I think it is strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted. [...] I think the amount of strawmanning here is just not bad enough
Why do you think it’s strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!
As I’ve explained, I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment, I gave two examples illustrating what I thought the relevant evidentiary standard looks like.
If I had a more Said-like commenting style, I would stop there, but as a faithful adherent of the church of arbitrarily large amounts of interpretive labor, I’m willing to do your work for you. When I imagine being a lawyer hired to argue that “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” engages in strawmanning, and trying to point to which specific parts of the post constitute a misrepresentation, the two best candidates I come up with are (a) the part where the author claims that “if someone did [speak of ‘physicist motors’], you might quietly begin to doubt how much they really knew about physics”, and (b) the part where the author characterizes Bensinger’s “defeasible default” of “role-playing being on the same side as the people who disagree with you” as being what members of other intellectual communities would call “concern trolling.”
However, I argue that both examples (a) and (b) fail to meet the relevant standard, of the text that the critic published materially misrepresenting the text that the original author published.
In the case of (a), while the most obvious reading of the text might be characterized as rude or insulting insofar as it suggests that readers should quietly begin to doubt Bensinger’s knowledge of rationality, insulting an author is not the same thing as materially misrepresenting the text that the author published. In the case of (b), “concern-trolling” is a pejorative term; it’s certainly true that Bensinger would not self-identify as engaging in concern-trolling. But that’s not what the text is arguing: the claim is that the substantive behavior that Bensinger recommends is something that other groups would identify as “concern trolling.” I continue to maintain that this is true.
Regarding another user’s claim that the “entire post” in question “is an overt strawman”, that accusation was rebutted in the comments by both myself and Said Achmiz.
In conclusion, I stand by my post.
If you disagree with my analysis here, that’s fine: I want people to be able to criticize my work. But I think you should be able to say why, specifically. I think it’s great when people make negative-valence claims about my work, and then back up those claims with specific arguments that I can learn from. But I think it’s bad when people make negative-valence claims about my work that they don’t argue for, and then I have to do their work for them as part of my service to the church of arbitrarily large amounts of interpretive labor (as I’ve done in this comment).
I meant the primary point of my previous comment to be “Duncan’s accusation in that thread is below the threshold of ‘deserves moderator response’” (i.e. Duncan wishes the LessWrong moderators would intervene on things like that on his behalf [edit: reliably and promptly], and I don’t plan to do that, because I don’t think it’s that big a deal). (I edited the previous comment to say “kinda” strawmanning, to clarify the emphasis more.)
My point here was just explaining to Vladimir why I don’t find it alarming that the LW team doesn’t prioritize strawmanning the way Duncan wants (I’m still somewhat confused about what Vlad meant with his question though and am honestly not sure what this conversation thread is about)
I’m still somewhat confused about what Vlad meant with his question though and am honestly not sure what this conversation thread is about
I see Vlad as saying “that it’s even on your priority list, given that it seems impossible to actually enforce, is worrying” not “it is worrying that it is low instead of high on your priority list.”
I don’t plan to do that, because I don’t think it’s that big a deal
I think it plausibly is a big deal and mechanisms that identify and point out when people are doing this (and really, I think a lot of the time it might just be misunderstanding) would be very valuable.
I don’t think moderators showing up and making a judgment and proclamation is the right answer. I’m more interested in making it so that people reading the thread can provide the feedback, e.g. via Reacts.
Just noting that “What specifically did it get wrong?” is a perfectly reasonable question to ask, and is one I would have (in most cases) been willing to answer, patiently and at length.
That I was unwilling in that specific case is an artifact of the history of Zack being quick to aggressively misunderstand that specific essay, in ways that I considered excessively rude (and which Zack has also publicly retracted).
Given that public retraction, I’m considering going back and in fact answering the “what specifically” question, as I normally would have at the time. If I end up not doing so, it will be more because of opportunity costs than anything else. (I do have an answer; it’s just a question of whether it’s worth taking the time to write it out months later.)
I’m very confused, how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post?
The author can say that a reader’s post is an inaccurate representation of the author’s ideas, but how can the author possibly read the reader’s mind and conclude that the reader is doing it on purpose? Isn’t that a claim that requires exceptional evidence?
Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that judgments are more likely to be influenced by factors other than truth. It won’t matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, or more attractive get more leeway).
I personally would very much rather people being judged by their concrete actions or impact of those actions (e.g. saying someone consistently rephrases arguments in ways that do not match the author’s intent or the majority of readers’ understanding), rather than their intent (e.g. saying someone is strawmanning).
To be against both strawmanning (with weak evidence) and ‘making unfounded statements about a person’s inner state’ seems to me like a self-contradictory and inconsistent stance.
I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.
I note re:
… that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would’ve been less likely to leave and would be more likely to return with marginal movement in that direction.
I don’t know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like “how would you have felt if we had moved 25% in this direction,” I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more “what? No, we’re well-adapted to the current environment; we’re the ones who’ve been filtered for.”
(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)
Nod. I want to clarify, the diff I’m asking about and being skeptical about is “assuming, holding constant, that LessWrong generally tightens moderation standards along many dimensions, but doesn’t especially prioritize the cluster of areas around ‘strawmanning being considered especially bad’ and ‘making unfounded statements about a person’s inner state’”
i.e. the LessWrong team is gearing up to invest a lot more in moderation one way or another. I expect you to be glad that happened, but still frequently feel in pain on the site and feel a need to take some kind of action regarding it. So, the poll I’d want is something like “given overall more mod investment, are people still especially concerned about the issues I associate with Duncan-in-particular”.
I agree some manner of poll in this space would be good, if we could implement it.
FWIW, I don’t avoid posting because of worries of criticism or nitpicking at all. I can’t recall a moment that’s ever happened.
But I do avoid posting once in a while, and avoid commenting, because I don’t always have enough confidence that, if things start to move in an unproductive way, there will be any *resolution* to that.
If I’d been on Lesswrong a lot 10 years ago, this wouldn’t stop me much. I used to be very… well, not happy exactly, but willing, to spend hours fighting the good fight and highlighting all the ways people are being bullies or engaging in bad argument norms or polluting the epistemic commons or using performative Dark Arts and so on.
But moderators of various sites (not LW) have often failed to be able to adjudicate such situations to my satisfaction, and over time I just felt like it wasn’t worth the effort in most cases.
From what I’ve observed, LW mod team is far better than most sites at this. But when I imagine a nearer-to-perfect-world, it does include a lot more “heavy handed” moderation in the form of someone outside of an argument being willing and able to judge and highlight whether someone is failing in some essential way to be a productive conversation partner.
I’m not sure what the best way to do this would be, mechanically, given realistic time and energy constraints. Maybe a special “Flag a moderator” button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate? Maybe even that would be too onerous, but *shrugs* There’s probably a scale at which it is valuable for most people while still being insufficient for someone like Duncan. Maybe the amount decreases each time you’re ruled against.
Overall I don’t want to overpromise something like “if LW has a stronger concentration of force expectation for good conversation norms I’d participate 100x more instead of just reading.” But 10x more to begin with, certainly, and maybe more than that over time.
This is similar to the idea for the Sunshine Regiment from the early days of LW 2.0, where the hope was that if we have a wide team of people who were sometimes called on to do mod-ish actions (like explaining what’s bad about a comment, or how it could have been worded, or linking to the relevant part of The Sequences, or so on), we could get much more of it. (It both would be a counterspell to bystander effect (when someone specific gets assigned a comment to respond to), a license to respond at all (because otherwise who are you to complain about this comment?), a counterfactual matching incentive to do it (if you do the work you’re assigned, you also fractionally encourage everyone else in your role to do the work they’re assigned), and a scheme to lighten the load (as there might be more mods than things to moderate).)
It ended up running into the problem that, actually there weren’t all that many people suited to and interested in doing moderator work, and so there was the small team of people who would do it (which wasn’t large enough to reliably feel on top of things instead of needing to prioritize to avoid scarcity).
I also don’t think there’s enough uniformity of opinion among moderators or high-karma-users or w/e that having a single judge evaluate whole situations will actually resolve them. (My guess is that if I got assigned to this case Duncan would have wanted to appeal, and if RobertM got assigned to this case Said would have wanted to appeal, as you can see from the comments they wrote in response. This is even tho I think RobertM and I agree on the object-level points and only disagree on interpretations and overall judgments of relevance!) I feel more optimistic about something like “a poll” of a jury drawn from some limited pool, where some situations go 10-0, others 7-3, some 5-5; this of course 10xs the costs compared to a single judge. (And open-access polls both have the benefit and drawback of volunteer labor.)
All good points, and yeah I did consider the issue of “appeals” but considered “accept the judgement you get” part of the implicit (or even explicit if necessary) agreeement made when raising that flag in the first place. Maybe it would require both people to mutually accept it.
But I’m glad the “pool of people” variation was tried, even if it wasn’t sustainable as volunteer work.
I’m not sure that’s true? I was asked at the time to be Sunshine mod, I said yes, and then no one ever followed up to assign me any work. At some point later I was given an explanation, but I don’t remember it.
You mean it’s considered a reasonable thing to aspire to, and just hasn’t reached the top of the list of priorities? This would be hair-raisingly alarming if true.
I’m not sure I parse this. I’d say yes, it’s a reasonable thing to aspire to and hasn’t reached the top of (the moderator/admins) priorities. You say “that would be alarming”, and infer… something?
I think you might be missing some background context on how much I think Duncan cares about this, and what I mean by not prioritizing it to the degree he does?
(I’m about to make some guesses about Duncan. I expect to re-enable his commenting within a day or so and he can correct me if I’m wrong)
I think Duncan thinks “Rationalist Discourse” Is Like “Physicist Motors” strawmans his position, and still gets mostly upvoted and if he wasn’t going out of his way to make this obvious, people wouldn’t notice. And when he does argue that this is happening, his comment doesn’t get upvoted much-at-all.
You might just say “well, Duncan is wrong about whether this is strawmanning”. I think it is [edit for clarity: somehow] strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted.
I think if I were to try say “knock it off, here’s a warning” the way I think Duncan wants me to, this would a) just be more time consuming than mods have the bandwidth for (we don’t do that sort of move in general, not just for this class of post), b) disincentivize literal-Zack and new marginal Zack-like people from posting, and, I think the amount of strawmanning here is just not bad enough to be worth that. (see this comment)
It’s a bad thing to institute policies when missing good proxies. Doesn’t matter if the intended objective is good, a policy that isn’t feasible to sanely execute makes things worse.
Whether statements about someone’s inner state are “unfounded” or whether something is a “strawman” is hopelessly muddled in practice, only open-ended discussion has a hope of resolving that. Not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.
But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in consideration of factual questions? Should we litigate p(doom) next? This is just obviously out of scope, I don’t see a principled difference. People should be allowed to be wrong, that’s the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).
(So I think it’s not just good proxies needed to execute a policy that are missing in this case, but the objective is also bad. It’s bad on both levels, hence “hair-raisingly alarming”.)
I’m actually still kind of confused about what you’re saying here (and in particular whether you think the current moderator policy of “don’t get involved most of the time” is correct)
You implied and then confirmed that you consider a policy for a certain objective an aspiration, I argued that policies I can imagine that target that objective would be impossible to execute, making things worse in collateral damage. And that separately the objective seems bad (moderating factual claims).
(In the above two comments, I’m not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn’t seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I’m not averse to re-injecting the context into their discussion. But I won’t necessarily find that interesting or have things to say on.)
So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators’ arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity) or to implicitly take into account some piece of alleged knowledge. This seems related to how I find the objective of the hypothetical policy against strawmanning a bad thing.
Okay, gotcha, I had not understood that. (Vaniver’s comment elsethread had also cleared this up for me I just hadn’t gotten around to replying to it yet)
One thing about “not close to the top of our list of priorities” means is that I haven’t actually thought that much about the issue in general. On the issue of “do LessWrong moderators think they should respond to strawmanning?” (or various other fallacies), my guess (thinking about it for like 5 minutes recently), I’d say something like:
I don’t think it makes sense for moderators to have a “policy against strawmanning”, in the sense that we take some kind of moderator action against it. But, a thing I think we might want to do is “when we notice someone strawmanning, make a comment saying ‘hey, this seems like strawmanning to me?’” (which we aren’t treating as special mod comment with special authority, more like just proactively being a good conversation participant). And, if we had a lot more resources, we might try to do something like “proactively noticing and responding to various fallacious arguments at scale.”
(FYI @Vladimir_Nesov I’m curious if this sort of thing still feels ‘hair raisingly alarming’ to you)
(Note that I see this issue as fairly different from the issue with Said, where the problem is not any one given comment or behavior, but an aggregate pattern)
Why do you think it’s strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!
As I’ve explained, I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment, I gave two examples illustrating what I thought the relevant evidentiary standard looks like.
If I had a more Said-like commenting style, I would stop there, but as a faithful adherent of the church of arbitrarily large amounts of interpretive labor, I’m willing to do your work for you. When I imagine being a lawyer hired to argue that “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” engages in strawmanning, and trying to point to which specific parts of the post constitute a misrepresentation, the two best candidates I come up with are (a) the part where the author claims that “if someone did [speak of ‘physicist motors’], you might quietly begin to doubt how much they really knew about physics”, and (b) the part where the author characterizes Bensinger’s “defeasible default” of “role-playing being on the same side as the people who disagree with you” as being what members of other intellectual communities would call “concern trolling.”
However, I argue that both examples (a) and (b) fail to meet the relevant standard, of the text that the critic published materially misrepresenting the text that the original author published.
In the case of (a), while the most obvious reading of the text might be characterized as rude or insulting insofar as it suggests that readers should quietly begin to doubt Bensinger’s knowledge of rationality, insulting an author is not the same thing as materially misrepresenting the text that the author published. In the case of (b), “concern trolling” is a pejorative term; it’s certainly true that Bensinger would not self-identify as engaging in concern trolling. But that’s not what the text is arguing: the claim is that the substantive behavior that Bensinger recommends is something that other groups would identify as “concern trolling.” I continue to maintain that this is true.
Regarding another user’s claim that the “entire post” in question “is an overt strawman”, that accusation was rebutted in the comments by both myself and Said Achmiz.
In conclusion, I stand by my post.
If you disagree with my analysis here, that’s fine: I want people to be able to criticize my work. But I think you should be able to say why, specifically. I think it’s great when people make negative-valence claims about my work, and then back up those claims with specific arguments that I can learn from. But I think it’s bad when people make negative-valence claims about my work that they don’t argue for, and then I have to do their work for them as part of my service to the church of arbitrarily large amounts of interpretive labor (as I’ve done in this comment).
I meant the primary point of my previous comment to be: “Duncan’s accusation in that thread is below the threshold of ‘deserves moderator response’” (i.e. Duncan wishes the LessWrong moderators would intervene on things like that on his behalf [edit: reliably and promptly], and I don’t plan to do that, because I don’t think it’s that big a deal). (I edited the previous comment to say “kinda” strawmanning, to clarify the emphasis more.)
My point here was just explaining to Vladimir why I don’t find it alarming that the LW team doesn’t prioritize strawmanning the way Duncan wants. (I’m still somewhat confused about what Vlad meant with his question, though, and am honestly not sure what this conversation thread is about.)
I see Vlad as saying “that it’s even on your priority list, given that it seems impossible to actually enforce, is worrying” not “it is worrying that it is low instead of high on your priority list.”
I think it plausibly is a big deal and mechanisms that identify and point out when people are doing this (and really, I think a lot of the time it might just be misunderstanding) would be very valuable.
I don’t think moderators showing up and making a judgment and proclamation is the right answer. I’m more interested in making it so people reading the thread can provide the feedback, e.g. via Reacts.
Just noting that “What specifically did it get wrong?” is a perfectly reasonable question to ask, and is one I would have (in most cases) been willing to answer, patiently and at length.
That I was unwilling in that specific case is an artifact of the history of Zack being quick to aggressively misunderstand that specific essay, in ways that I considered excessively rude (and which Zack has also publicly retracted).
Given that public retraction, I’m considering going back and in fact answering the “what specifically” question, as I normally would have at the time. If I end up not doing so, it will be more because of opportunity costs than anything else. (I do have an answer; it’s just a question of whether it’s worth taking the time to write it out months later.)
I’m very confused, how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post?
The author can say that a reader’s post is an inaccurate representation of the author’s ideas, but how can the author possibly read the reader’s mind and conclude that the reader is doing it on purpose? Isn’t that a claim that requires exceptional evidence?
Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that judgments are more likely to be influenced by factors other than truth. It won’t matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, or more attractive get more leeway).
I personally would much rather people be judged by their concrete actions or the impact of those actions (e.g. saying someone consistently rephrases arguments in ways that match neither the author’s intent nor the majority of readers’ understanding), rather than by their intent (e.g. saying someone is strawmanning).
To be against strawmanning (on weak evidence) while also being against “making unfounded statements about a person’s inner state” seems to me like a self-contradictory and inconsistent stance, since a strawmanning accusation is itself a claim about someone’s inner state.