Preliminary Verdict (but not “operationalization” of verdict)
tl;dr – @Duncan_Sabien and @Said Achmiz can each write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on one of the following:
credibly commit to changing their behavior in a fairly significant way,
or, accept some kind of tech solution that limits their engagement in some reliable way that doesn’t depend on their continued behavior.
or, be banned from commenting on other people’s posts (but still allowed to make new top level posts and shortforms)
(After the two comments they can continue to PM the LW team, although we’ll have some limit on how much time we’re going to spend negotiating)
Some background:
Said and Duncan are both among the most complained-about users since LW2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I’d be sad to see go.
The LessWrong team has spent hundreds of person-hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of “we learned new useful things about site governance”), there’s a limit to how much it’s worth moderating or mediating conflict re: two particular users.
So, something pretty significant needs to change.
A thing that sticks out in both the case of Said and Duncan is that they a) are both fairly law-abiding (i.e. when the mods have asked them for concrete things, they adhere to our rules, and clearly support rule-of-law and the general principle of Well Kept Gardens), but b) both have a very strong principled sense of what a “good” LessWrong would look like and are optimizing pretty hard for that within whatever constraints we give them.
I think our default rules are chosen to be something that someone might trip accidentally, if you’re trying to mostly be a good, stereotypical citizen but occasionally end up having a bad day. Said and Duncan are both trying pretty hard to be good citizens of another country, one that the LessWrong team is consciously not trying to be. It’s hard to build good rules/guidelines that robustly deal with that kind of optimization.
I still don’t really know what to do, but I want to flag that the goal I’ll be aiming for here is “make it such that Said and Duncan either have actively (credibly) agreed to stop optimizing in a fairly deep way, or are somehow limited by site tech such that they can’t do the cluster of things they want to do that feels damaging to me.”
If neither of those strategies turns out to be tractable, banning is on the table (even though I think both of them contribute a lot in various ways and I’d be pretty sad to resort to that option). I have some hope that tech-based solutions can work.
(This is not a claim about which of them is more valuable overall, or better/worse/right-or-wrong-in-this-particular-conflict. There’s enough history with both of them being above-a-threshold-of-worrisome that it seems like the LW team should just actually resolve the deep underlying issues, regardless of who’s more legitimately aggrieved this particular week)
Re: Said:
One of the most common complaints I’ve gotten about LessWrong, from new users as well as established, generally highly regarded users, is “too many nitpicky comments that feel like they’re missing the point”. I think LessWrong is less fragile than it was in 2018, when I last argued extensively with Said about this, but I think it’s still an important/valid complaint.
Said seems to actively prefer a world where the people who are annoyed by him go away, and thinks it’d be fine if this meant LessWrong had radically fewer posts. I think he’s misunderstanding something about how intellectual progress actually works, and about how valuable his comments actually are. (As I said previously, I tend to think Said’s first couple comments are worthwhile. The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics)
We’ve had extensive conversations with Said about changing his approach here. He seems pretty committed to not changing his approach. So, if he’s sticking around, I think we’d need some kind of tech solution. The outcome I want here is that in practice Said doesn’t bother people who don’t want to be bothered. This could involve solutions somewhat specific-to-Said, or (maybe) be a sitewide rule that works out to stop a broader class of annoying behavior. (I’m skeptical the latter will turn out to work without being net-negative, capturing too many false positives, but seems worth thinking about)
Here are a couple ideas:
Easily-triggered rate-limiting. I could imagine an admin feature that literally just lets Said comment a few times on a post, but if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day. I expect fine-tuning this to actually work the way I imagine in my head is a fair amount of work, but not that much. (See the sketch after this list.)
Proactive warning. If a post author has downvoted Said’s comments on their post multiple times, they get some kind of UI alert saying “Yo, FYI, admins have flagged this user as having a pattern of commenting that a lot of authors have found net-negative. You may want to take that into account when deciding how much to engage”.
There’s some cluster of ideas surrounding how authors are informed/encouraged to use the banning options. It sounds like the entire topic of “authors can ban users” is worth revisiting so my first impulse is to avoid investing in it further until we’ve had some more top-level discussion about the feature.
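For concreteness, here is a minimal sketch (in TypeScript) of how the first idea might work. All of the type names, thresholds, and trigger logic are illustrative assumptions, not an actual LessWrong feature:

```typescript
// Hypothetical downvote-triggered rate limit, as described above.
// Thresholds are placeholders; real values would need tuning.

interface PerPostCommentStats {
  commentCount: number; // comments this user has left on this post
  netKarma: number;     // summed karma of those comments
}

interface RateLimitDecision {
  canComment: boolean;
  maxWords?: number; // wordcount cap once the limit is triggered
}

const FREE_COMMENTS = 3;        // comment freely this many times per post
const DOWNVOTE_TRIGGER = -5;    // net karma at or below this trips the limit
const WRAP_UP_WORD_LIMIT = 150; // forced brevity: wrap up and call it a day

function checkRateLimit(stats: PerPostCommentStats): RateLimitDecision {
  // The first few comments on a post are always allowed.
  if (stats.commentCount < FREE_COMMENTS) {
    return { canComment: true };
  }
  // Significantly downvoted on this post: still allowed to comment,
  // but each further comment is capped to a short wordcount.
  if (stats.netKarma <= DOWNVOTE_TRIGGER) {
    return { canComment: true, maxWords: WRAP_UP_WORD_LIMIT };
  }
  // Not downvoted: no limit applies.
  return { canComment: true };
}
```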
Why is it worth this effort?
You might ask “Ray, if you think Said is such a problem user, why bother investing this effort instead of just banning him?”. Here are some areas I think Said contributes in a way that seem important:
Various ops/dev work maintaining sites like readthesequences.com, greaterwrong.com, and gwern.net. (edit: as Ben Pace notes, this is pretty significant, and I agree with his note that “Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world”)
Most of his comments are in fact just pretty reasonable and good in a straightforward way.
While I don’t get much value out of protracted conversations about it, I do think there’s something valuable about Said being very resistant to getting swept up in fad ideas. Sometimes the emperor in fact really does have no clothes. Sometimes the emperor has clothes, but you really haven’t spelled out your assumptions very well and are confused about how to operationalize your idea. I do think this is pretty important, and would prefer Said to somehow “only do the good version of this”, but it seems fine to accept it as a package-deal.
Re: Duncan
I’ve spent years trying to hash out “what exactly is the subtle but deep/huge difference between Duncan’s moderation preferences and the LW team’s?” I have found each round of that exchange valuable, but typically it didn’t turn out that whatever-we-thought-was-the-crux was a particularly Big Crux.
I think I care about each of the things Duncan is worried about (e.g. the things listed in Basics of Rationalist Discourse). But I tend to think the way Duncan goes about trying to enforce such things is extremely costly.
Here’s this month/year’s stab at it: Duncan cares particularly about strawmans/mischaracterizations/outright lies getting corrected quickly (i.e. within ~24 hours). (See Concentration of Force for his writeup on at least one set of reasons this matters.) I think there is value in correcting them or telling people to “knock it off” quickly. But,
a) moderation time is limited, and b) even in the world where we massively invest in moderation… the thing Duncan cares most about moderating quickly just doesn’t seem like it should necessarily be at the top of the priority queue to me?
I was surprised and updated on You Don’t Exist, Duncan getting as heavily upvoted as it did, so I think it’s plausible that this is all a bigger deal than I currently think it is. (that post goes into one set of reasons that getting mischaracterized hurts). And there are some other reasons this might be important (that have to do with mischaracterizations taking off and becoming the de-facto accepted narrative).
I do expect most of our best authors to agree with Duncan that these things matter, and generally want the site to be moderated more heavily somehow. But I haven’t actually seen anyone but Duncan argue they should be prioritized nearly as heavily as he wants. (i.e. rather than something you just mostly take-in-stride, downvote and then try to ignore, focusing on other things)
I think most high-contributing users agree the site should be moderated more (see the significant upvotes on LW Team is adjusting moderation policy), but don’t necessarily agree on how. It’d be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
I don’t know that that really captured the main thing here. I feel less resolved on what should change on LessWrong re: Duncan. But I (and other LW site moderators) want to be clear that while strawmanning is bad and you shouldn’t do it, we don’t expect to intervene on most individual cases. I recommend strong-downvoting, and leaving one comment stating that the thing seems false.
I continue to think it’s fine for Duncan to moderate his own posts however he wants (although as noted previously I think an exception should be made for posts that are actively pushing sitewide moderation norms)
Some goals I’d have are:
people on LessWrong feel safe that they aren’t likely to get into sudden, protracted conflict with Duncan that persists outside his own posts.
the LessWrong team and Duncan are on-the-same-page about the LW team not being willing to allocate dozens of hours of attention at a moment’s notice in the specific ways Duncan wants. I don’t think it’s accurate to say “there’s no lifeguard on duty”, but I think it’s quite accurate to say that the lifeguard on duty isn’t planning to prioritize the things Duncan wants; so, Duncan should basically participate on LessWrong as if there is, in effect, “no lifeguard” from his perspective. I’m spending ~40 hours this week processing this situation, with a goal of basically not having to do that again.
In the past Duncan took down all his LW posts when LW seemed to be actively hurting him. I’ve asked him about this in the past year, and (I think?) he said he was confident that he wouldn’t do so again. One thing I’d want going forward is a more public comment that, if he’s going to keep posting on LessWrong, he’s not going to do that again. (I don’t mind him taking down 1-2 problem posts that led to really frustrating commenting experiences for him, but if he were likely to take all the posts down, that undercuts much of the value of having him here contributing)
FWIW I do think it’s moderately likely that the LW team writes a post taking many concepts from Basics of Rationalist Discourse and integrating them into our overall moderation policy. (It’s maybe doable for Duncan to rewrite the parts that some people object to, and to enable commenting on those posts by everyone, but I think it’s kinda reasonable for people to feel uncomfortable with Duncan setting the framing, and it’s worth the LW team having a dedicated “our frame on what the site norms are” anyway.)
In general I think Duncan has written a lot of great posts – many of his posts have been highly ranked in the LessWrong review. I expect him to continue to provide a lot of value to the LessWrong ecosystem one way or another.
I’ll note that while I have talked to Duncan for dozens(?) of hours trying to hash out various deep issues and not met much success, I haven’t really tried negotiating with him specifically about how he relates to LessWrong. I am fairly hopeful we can work something out here.
I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.
I note re:
It’d be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
… that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would’ve been less likely to leave and would be more likely to return with marginal movement in that direction.
I don’t know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like “how would you have felt if we had moved 25% in this direction,” I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more “what? No, we’re well-adapted to the current environment; we’re the ones who’ve been filtered for.”
(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)
Nod. I want to clarify, the diff I’m asking about and being skeptical about is “assuming, holding constant, that LessWrong generally tightens moderation standards along many dimensions, but doesn’t especially prioritize the cluster of areas around ‘strawmanning being considered especially bad’ and ‘making unfounded statements about a person’s inner state’”
i.e. the LessWrong team is gearing up to invest a lot more in moderation one way or another. I expect you to be glad that happened, but still frequently feel in pain on the site and feel a need to take some kind of action regarding it. So, the poll I’d want is something like “given overall more mod investment, are people still especially concerned about the issues I associate with Duncan-in-particular”.
I agree some manner of poll in this space would be good, if we could implement it.
FWIW, I don’t avoid posting because of worries of criticism or nitpicking at all. I can’t recall a moment that’s ever happened.
But I do avoid posting once in a while, and avoid commenting, because I don’t always have enough confidence that, if things start to move in an unproductive way, there will be any *resolution* to that.
If I’d been on LessWrong a lot 10 years ago, this wouldn’t stop me much. I used to be very… well, not happy exactly, but willing, to spend hours fighting the good fight and highlighting all the ways people are being bullies or engaging in bad argument norms or polluting the epistemic commons or using performative Dark Arts and so on.
But moderators of various sites (not LW) have often failed to be able to adjudicate such situations to my satisfaction, and over time I just felt like it wasn’t worth the effort in most cases.
From what I’ve observed, LW mod team is far better than most sites at this. But when I imagine a nearer-to-perfect-world, it does include a lot more “heavy handed” moderation in the form of someone outside of an argument being willing and able to judge and highlight whether someone is failing in some essential way to be a productive conversation partner.
I’m not sure what the best way to do this would be, mechanically, given realistic time and energy constraints. Maybe a special “Flag a moderator” button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate? Maybe even that would be too onerous, but *shrugs* There’s probably a scale at which it is valuable for most people while still being insufficient for someone like Duncan. Maybe the amount decreases each time you’re ruled against.
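To make the mechanics concrete, here’s a minimal sketch of such a quota in TypeScript; the karma scaling and all the numbers are hypothetical assumptions:

```typescript
// Hypothetical "Flag a moderator" quota: a monthly allowance of flags
// that grows with account karma and shrinks each time a mod rules
// against the flagger. All numbers are placeholders.

interface FlagQuota {
  flagsUsedThisMonth: number;
  rulingsAgainstUser: number; // times a mod ruled against this flagger
}

function baseFlagsPerMonth(karma: number): number {
  // e.g. 2 flags/month at low karma, +1 per 1000 karma, capped at 6
  return Math.min(6, 2 + Math.floor(karma / 1000));
}

function canFlagModerator(quota: FlagQuota, karma: number): boolean {
  const allowance = baseFlagsPerMonth(karma) - quota.rulingsAgainstUser;
  return quota.flagsUsedThisMonth < Math.max(0, allowance);
}
```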
Overall I don’t want to overpromise something like “if LW has a stronger concentration of force expectation for good conversation norms I’d participate 100x more instead of just reading.” But 10x more to begin with, certainly, and maybe more than that over time.
Maybe a special “Flag a moderator” button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate?
This is similar to the idea for the Sunshine Regiment from the early days of LW 2.0, where the hope was that if we had a wide team of people who were sometimes called on to do mod-ish actions (like explaining what’s bad about a comment, or how it could have been worded, or linking to the relevant part of The Sequences, or so on), we could get much more of it. (It would be at once a counterspell to the bystander effect (someone specific gets assigned a comment to respond to), a license to respond at all (because otherwise, who are you to complain about this comment?), a counterfactual-matching incentive to do it (if you do the work you’re assigned, you also fractionally encourage everyone else in your role to do the work they’re assigned), and a scheme to lighten the load (as there might be more mods than things to moderate).)
It ended up running into the problem that, actually there weren’t all that many people suited to and interested in doing moderator work, and so there was the small team of people who would do it (which wasn’t large enough to reliably feel on top of things instead of needing to prioritize to avoid scarcity).
I also don’t think there’s enough uniformity of opinion among moderators or high-karma-users or w/e that having a single judge evaluate whole situations will actually resolve them. (My guess is that if I got assigned to this case Duncan would have wanted to appeal, and if RobertM got assigned to this case Said would have wanted to appeal, as you can see from the comments they wrote in response. This is even tho I think RobertM and I agree on the object-level points and only disagree on interpretations and overall judgments of relevance!) I feel more optimistic about something like “a poll” of a jury drawn from some limited pool, where some situations go 10-0, others 7-3, some 5-5; this of course 10xs the costs compared to a single judge. (And open-access polls both have the benefit and drawback of volunteer labor.)
All good points, and yeah, I did consider the issue of “appeals”, but considered “accept the judgement you get” part of the implicit (or even explicit, if necessary) agreement made when raising that flag in the first place. Maybe it would require both people to mutually accept it.
But I’m glad the “pool of people” variation was tried, even if it wasn’t sustainable as volunteer work.
It ended up running into the problem that, actually there weren’t all that many people suited to and interested in doing moderator work
I’m not sure that’s true? I was asked at the time to be Sunshine mod, I said yes, and then no one ever followed up to assign me any work. At some point later I was given an explanation, but I don’t remember it.
LessWrong [...] doesn’t especially prioritize the cluster of areas around ‘strawmanning being considered especially bad’ and ‘making unfounded statements about a person’s inner state’
You mean it’s considered a reasonable thing to aspire to, and just hasn’t reached the top of the list of priorities? This would be hair-raisingly alarming if true.
I’m not sure I parse this. I’d say yes, it’s a reasonable thing to aspire to and hasn’t reached the top of (the moderator/admins) priorities. You say “that would be alarming”, and infer… something?
I think you might be missing some background context on how much I think Duncan cares about this, and what I mean by not prioritizing it to the degree he does?
(I’m about to make some guesses about Duncan. I expect to re-enable his commenting within a day or so and he can correct me if I’m wrong)
You might just say “well, Duncan is wrong about whether this is strawmanning”. I think it is [edit for clarity: somehow] strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted.
I think if I were to try to say “knock it off, here’s a warning” the way I think Duncan wants me to, this would a) just be more time-consuming than mods have the bandwidth for (we don’t do that sort of move in general, not just for this class of post), b) disincentivize literal-Zack and new marginal Zack-like people from posting, and c) I think the amount of strawmanning here is just not bad enough to be worth that. (see this comment)
It’s a bad thing to institute policies when good proxies are missing. It doesn’t matter if the intended objective is good: a policy that isn’t feasible to sanely execute makes things worse.
Whether statements about someone’s inner state are “unfounded”, or whether something is a “strawman”, is hopelessly muddled in practice; only open-ended discussion has a hope of resolving that. Not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.
But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in consideration of factual questions? Should we litigate p(doom) next? This is just obviously out of scope; I don’t see a principled difference. People should be allowed to be wrong; that’s the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).
(So I think it’s not just good proxies needed to execute a policy that are missing in this case, but the objective is also bad. It’s bad on both levels, hence “hair-raisingly alarming”.)
I’m actually still kind of confused about what you’re saying here (and in particular whether you think the current moderator policy of “don’t get involved most of the time” is correct)
You implied and then confirmed that you consider a policy for a certain objective an aspiration; I argued that the policies I can imagine that target that objective would be impossible to execute, making things worse in collateral damage. And that, separately, the objective seems bad (moderating factual claims).
(In the above two comments, I’m not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn’t seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I’m not averse to re-injecting the context into their discussion. But I won’t necessarily find that interesting or have things to say on.)
So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators’ arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity) or to implicitly take into account some piece of alleged knowledge. This seems related to how I find the objective of the hypothetical policy against strawmanning a bad thing.
Okay, gotcha, I had not understood that. (Vaniver’s comment elsethread had also cleared this up for me I just hadn’t gotten around to replying to it yet)
One thing “not close to the top of our list of priorities” means is that I haven’t actually thought that much about the issue in general. On the question of “should LessWrong moderators respond to strawmanning?” (or various other fallacies), my guess (having thought about it for like 5 minutes recently) is something like:
I don’t think it makes sense for moderators to have a “policy against strawmanning”, in the sense that we take some kind of moderator action against it. But, a thing I think we might want to do is “when we notice someone strawmanning, make a comment saying ‘hey, this seems like strawmanning to me?’” (which we aren’t treating as a special mod comment with special authority; more like just proactively being a good conversation participant). And, if we had a lot more resources, we might try to do something like “proactively noticing and responding to various fallacious arguments at scale.”
(Note that I see this issue as fairly different from the issue with Said, where the problem is not any one given comment or behavior, but an aggregate pattern)
I think it is strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted. [...] I think the amount of strawmanning here is just not bad enough
Why do you think it’s strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!
As I’ve explained, I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment, I gave two examples illustrating what I thought the relevant evidentiary standard looks like.
If I had a more Said-like commenting style, I would stop there, but as a faithful adherent of the church of arbitrarily large amounts of interpretive labor, I’m willing to do your work for you. When I imagine being a lawyer hired to argue that “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” engages in strawmanning, and trying to point to which specific parts of the post constitute a misrepresentation, the two best candidates I come up with are (a) the part where the author claims that “if someone did [speak of ‘physicist motors’], you might quietly begin to doubt how much they really knew about physics”, and (b) the part where the author characterizes Bensinger’s “defeasible default” of “role-playing being on the same side as the people who disagree with you” as being what members of other intellectual communities would call “concern trolling.”
However, I argue that both examples (a) and (b) fail to meet the relevant standard, of the text that the critic published materially misrepresenting the text that the original author published.
In the case of (a), while the most obvious reading of the text might be characterized as rude or insulting insofar as it suggests that readers should quietly begin to doubt Bensinger’s knowledge of rationality, insulting an author is not the same thing as materially misrepresenting the text that the author published. In the case of (b), “concern-trolling” is a pejorative term; it’s certainly true that Bensinger would not self-identify as engaging in concern-trolling. But that’s not what the text is arguing: the claim is that the substantive behavior that Bensinger recommends is something that other groups would identify as “concern trolling.” I continue to maintain that this is true.
Regarding another user’s claim that the “entire post” in question “is an overt strawman”, that accusation was rebutted in the comments by both myself and Said Achmiz.
In conclusion, I stand by my post.
If you disagree with my analysis here, that’s fine: I want people to be able to criticize my work. But I think you should be able to say why, specifically. I think it’s great when people make negative-valence claims about my work, and then back up those claims with specific arguments that I can learn from. But I think it’s bad when people make negative-valence claims about my work that they don’t argue for, and then I have to do their work for them as part of my service to the church of arbitrarily large amounts of interpretive labor (as I’ve done in this comment).
I meant the primary point of my previous comment to be “Duncan’s accusation in that thread is below the threshold of ‘deserves moderator response’” (i.e. Duncan wishes the LessWrong moderators would intervene on things like that on his behalf [edit: reliably and promptly], and I don’t plan to do that, because I don’t think it’s that big a deal). (I edited the previous comment to say “kinda” strawmanning, to clarify the emphasis more.)
My point here was just explaining to Vladimir why I don’t find it alarming that the LW team doesn’t prioritize strawmanning the way Duncan wants (I’m still somewhat confused about what Vlad meant with his question though and am honestly not sure what this conversation thread is about)
I’m still somewhat confused about what Vlad meant with his question though and am honestly not sure what this conversation thread is about
I see Vlad as saying “that it’s even on your priority list, given that it seems impossible to actually enforce, is worrying” not “it is worrying that it is low instead of high on your priority list.”
I don’t plan to do that, because I don’t think it’s that big a deal
I think it plausibly is a big deal and mechanisms that identify and point out when people are doing this (and really, I think a lot of the time it might just be misunderstanding) would be very valuable.
I don’t think moderators showing up and making a judgment and proclamation is the right answer. I’m more interested in making it so people reading the thread can provide the feedback, e.g. via Reacts.
Just noting that “What specifically did it get wrong?” is a perfectly reasonable question to ask, and is one I would have (in most cases) been willing to answer, patiently and at length.
That I was unwilling in that specific case is an artifact of the history of Zack being quick to aggressively misunderstand that specific essay, in ways that I considered excessively rude (and which Zack has also publicly retracted).
Given that public retraction, I’m considering going back and in fact answering the “what specifically” question, as I normally would have at the time. If I end up not doing so, it will be more because of opportunity costs than anything else. (I do have an answer; it’s just a question of whether it’s worth taking the time to write it out months later.)
I’m very confused, how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post?
The author can say that a reader’s post is an inaccurate representation of the author’s ideas, but how can the author possibly read the reader’s mind and conclude that the reader is doing it on purpose? Isn’t that a claim that requires exceptional evidence?
Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that judgments are more likely to be influenced by factors other than truth. It won’t matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, more attractive get more leeway).
I personally would very much rather people being judged by their concrete actions or impact of those actions (e.g. saying someone consistently rephrases arguments in ways that do not match the author’s intent or the majority of readers’ understanding), rather than their intent (e.g. saying someone is strawmanning).
To be against both strawmanning (with weak evidence) and ‘making unfounded statements about a person’s inner state’ seems to me like a self-contradictory and inconsistent stance.
I think Said and Duncan are clearly channeling this conflict, but the conflict is not about them, and doesn’t originate with them. So by having them go away or stop channeling the conflict, you leave it unresolved and without its most accomplished voices, shattering the possibility of resolving it in the foreseeable future. This is the hush-hush strategy of dealing with troubling observations: fixing symptoms instead of researching the underlying issues, however onerous that is proving to be.
(This announcement is also rather hush-hush; it’s not a post, and so I’ve only just discovered it, 5 days later. This leaves it with less scrutiny than I think transparency of such an important step requires.)
It’s an update to me that you hadn’t seen it (I figured since you had replied to a bunch of other comments you were tracking the thread, and more generally figured that since there are 360 comments on this thing it wasn’t suffering from lack of scrutiny). But, plausible that we should pin it for a day when we make our next set of announcement comments (which are probably coming sometime this weekend, fwiw).
I meant this thread specifically, with the action announcement, not the post. The thread was started 4 days after the post, so everyone who wasn’t tracking the post had every opportunity to miss it. (It shouldn’t matter for the point about scrutiny that I in particular might’ve been expected to not miss it.)
Just want to note that I’m less happy with a LessWrong without Duncan. I very much value Duncan’s pushback against what I see as a slow decline in quality, and so I would prefer him to stay and continue doing what he’s doing. The fact that he’s being complained about makes sense, but is mostly a function of him doing something valuable. I have had a few times where I have been slapped down by Duncan, albeit in comments on his Facebook page, where it’s much clearer that his norms are operative, and I’ve been annoyed; but each of those times, despite being frustrated, I have found that I’m being pushed in the right direction and corrected for something I’m doing wrong.
I agree that it’s bad that his comments are often overly confrontational, but there’s no way to deliver constructive feedback that doesn’t involve a degree of confrontation, and I don’t see many others pushing to raise the sanity waterline. In a world where a dozen people were fighting the good fight, I’d be happy to ask him to take a break. But this isn’t that world, and it seems much better to actively promote a norm of people saying they don’t have energy or time to engage than telling Duncan (and maybe / hopefully others) not to push back when they see thinking and comments which are bad.
The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics
I think I want to reiterate my position that I would be sad about Said not being able to discuss Circling (which I think is one of the topics in that fuzzy cluster). I would still like to have a written explanation of Circling (for LW) that is intelligible to Said, and him being able to point out which bits are unintelligible and not feel required to pretend that they are intelligible seems like a necessary component of that.
With regards to Said’s ‘general pattern’, I think there’s a dynamic around socially recognized gnosis where sometimes people will say “sorry, my inability/unwillingness to explain this to you is your problem” and have the commons on their side or not, and I would be surprised to see LW take the position that authors decide for that themselves. Alternatively, tech that somehow makes this more discoverable and obvious—like polls or reacts or w/e—does seem good.
I think productive conversations stem from there being some (but not too much) diversity in what gnosis people are willing to recognize, and in the ability for subspaces to have smaller conversations that require participants to recognize some gnosis.
Is there any evidence that either Duncan or Said are actually detrimental to the site in general, or is it mostly in their interactions directly with each other? As far as I can see, 99% of the drama here is in their conflicts directly with each other and heavy moderation team involvement in it.
From my point of view (as an interested reader and commenter), this latest drama appears to have started partly due to site moderation essentially forcing them into direct conflict with each other via a proposal to adopt norms based on Duncan’s post while Said and others were and continue to be banned from commenting on it.
From this point of view, I don’t see what either Said or Duncan has done to justify any sort of ban, temporary or not.
This decision is based mostly on past patterns with both of them, over the course of ~6 years.
The recent conflict, in isolation, is something where I’d kinda look sternly at them and kinda judge them (and maybe a couple others) for getting themselves into a demon thread*, where each decision might look locally reasonable but nonetheless it escalates into a weird proliferating discussion that is (at best) a huge attention sink and (at worst) gets people into an increasingly antagonistic fight that brings out people’s worse instincts. If I spent a long time analyzing I might come to more clarity about who was more at fault, but I think the most I might do for this one instance is ban one or both of them for like a week or so and tell them to knock it off.
The motivation here is from a larger history. (I’ve summarized one chunk of that history from Said here, and expect to go into both a bit more detail about Said and a bit more about Duncan in some other comments soon, although I think I describe the broad strokes in the top-level-comment here)
And notably, my preference is for this not to result in a ban. I’m hoping we can work something out. The thing I’m laying down in this comment is “we do have to actually work something out.”
I condemn the restrictions on Said Achmiz’s speech in the strongest possible terms. I will likely have more to say soon, but I think the outcome will be better if I take some time to choose my words carefully.
Did we read the same verdict? The verdict says that the end of the ban is conditional on the users in question “credibly commit[ting] to changing their behavior in a fairly significant way”, “accept[ing] some kind of tech solution that limits their engagement in some reliable way that doesn’t depend on their continued behavior”, or “be[ing] banned from commenting on other people’s posts”.
The first is a restriction on variety of speech. (I don’t see what other kind of behavioral change the mods would insist on—or even could insist on, given the textual nature of an online forum where everything we do here is speech.) The third is a restriction of venue, which I claim predictably results in a restriction of variety. (Being forced to relegate your points into a shortform or your own post, won’t result in the same kind of conversation as being able to participate in ordinary comment threads.) I suppose the “tech solution” of the second could be mere rate-limiting, but the “doesn’t depend on their continued behavior” clause makes me think something more onerous is intended.
(The grandparent only mentions Achmiz because I particularly value his contributions, and because I think many people would prefer that I don’t comment on the other case, but I’m deeply suspicious of censorship in general, for reasons that I will likely explain in a future post.)
The tech solution I’m currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I’m leaning towards either “3 comments per post” or “3 comments per post per day”. (My ideal world, for Said, is something like “3 comments per post to start, but, if nothing controversial happens and he’s not ruining the vibe, he gets to comment more without limit.” But that’s fairly difficult to operationalize, and a lot of dev-time for a custom feature limiting one or two particular users.)
I do have a high-level goal of “users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so”. The question here is “do you want the ‘real work’ of developing new rationality techniques to happen on LessWrong, or someplace else where Said/etc can’t bother you?” (which is what’s mostly currently happening).
So, yeah, the concrete outcome here is Said not getting to comment everywhere he wants, but he’s already not getting to do that, because the relevant content + associated usage-building happens off lesswrong, and then he finds himself in a world where everyone is “suddenly” in significant agreement about some “frame control” concept he’s never heard of. (I can’t find the exact comment atm, but I remember him expressing alarm at the degree of consensus on frame control, in the comments of Aella’s post. There was consensus because somewhere between 50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years. I’m not sure there’s a world where that discussion was happening on LW, because frame control tends to come up in dicey, sensitive, adversarial situations.)
So, I think the censorship policy you’re imagining is a fabricated option.
My current guess at actual next steps: Said gets a “3 comments per post per day” restriction and is banned from commenting on shortform in particular (since our use case for that is specifically antithetical to the vibe Said wants); then (after also setting up some other moderation tools and making some judgment calls on some other similar-but-lower-profile users), we message people like Logan Strohl and say “hey, we’ve made a bunch of changes, we’d like it if you came in and tried using the site again”, and hope that this time it actually works.
(Duncan might get a similar treatment, for fairly different reasons, although I’m more optimistic about he/us actually negotiating something that requires less heavyhanded restriction)
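To illustrate, a minimal sketch of that restriction in TypeScript; the field names and the shape of the check are assumptions for illustration, not the actual implementation:

```typescript
// Hypothetical "3 comments per post per day" restriction with a
// shortform ban, as described above.

interface PostContext {
  isShortform: boolean;
  commentsByUserToday: number; // this user's comments on this post today
}

const DAILY_PER_POST_LIMIT = 3;

function canCommentNow(post: PostContext): boolean {
  // Restricted users can't comment on shortform at all.
  if (post.isShortform) return false;
  // Otherwise, at most a few comments per post per day.
  return post.commentsByUserToday < DAILY_PER_POST_LIMIT;
}
```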
a high level goal of “users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so”.
We already have a user-level personal ban feature! (Said doesn’t like it, but he can’t do anything about it.) Why isn’t the solution here just, “Users who don’t want to receive comments from Said ban him from their own posts”? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.
the concrete outcome here is Said not getting to comment everywhere he wants, but he’s already not getting to do that, because the relevant content + associated usage-building happens off lesswrong
This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I’m unlikely to guess it; you’ll have to clarify.) It’s true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by individual users—currently, that’s Elizabeth, and DirectedEvolution, and one other user).
I’m leaning towards either “3 comments per post” or “3 comments per post per day”. (My ideal world, for Said, is something like “3 comments per post to start, but, if nothing controversial happens and he’s not ruining the vibe
This would make Less Wrong worse for me. I want Said Achmiz to have unlimited, unconditional commenting privileges on my posts. (Unconditional means the software doesn’t stop Said from posting a fourth comment; “to start” is not unconditional if it requires a human to approve the fourth comment.)
Judging by the popularity of Alicorn’s comment testifying that she “[doesn’t] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything” (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I’d bet a lot of other users feel similarly. From your stated plans, it looks like you’re not taking those 43 users’ preferences into account. Why is that? This seems like a question you should be able to answer.
Judging by the popularity of Alicorn’s comment testifying that she “[doesn’t] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything” (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I’d bet a lot of other users feel similarly. From your stated plans, it looks like you’re not taking those 43 users’ preferences into account.
Stipulating that votes on this comment are more than negligibly informative on this question… it seems bizarre to count karma rather than agreement votes (currently 51 agreement from 37 votes). But also anyone who downvoted (or disagreed) here is someone who you’re counting as not being taken into account, which seems exactly backwards.
Some other random notes (probably not maximally cruxy for you, but seemed worth noting):
1. If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curiosity/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I’d be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.
But we’ve had tons of conversations with Said asking him to adjust his behavior, and he seems pretty committed to sticking to his current behavior. At best he seems grudgingly willing to avoid some threads if there are clear-cut rules we can spell out, but I don’t trust him to actually tell the difference in many edge cases.
We’ve spent a hundred+ person-hours over the years thinking about how to limit Said’s damage, and have a lot of other priorities on our plate. I consider it a priority to resolve this in a way that won’t continue to eat up more of our time.
2. I did list “actually just encourage people to use the ban tool more” as an option. (DirectedEvolution didn’t even know it was an option until it was pointed out to him recently). If you actually want to advocate for that over a Said-specific rate-limit, I’m open to that (my model of you thinks that’s worse).
(Note: I (and, I think, several other people on the mod team) would have banned him from my comment sections if I didn’t feel an obligation as a mod/site-admin to have a more open comment section.)
3. I will probably build something that lets people Opt Into More Said. I think it’s fairly likely the mod team will do some heavier-handed moderation in the nearish future, and I think a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a “let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way.”
(I don’t expect that to really resolve your crux here but it seemed like it’s at least an improvement on the margin)
4. I think it’s plausible that the right solution is to ban him from shortform, and use shortform as the place where people can talk about whatever they want in a more open/curious vibe. I currently don’t think this is the right call, because I just think it’s… just actually a super reasonable, centrally supported use-case of top-level posts to have sets of norms that are actively curious and invested. It seems really wrong to me to think the only kind of conversation you need to make intellectual progress is “criticize without trying to figure out what the OP is about and what problems they’re trying to solve”.
I do think, for the case of Said, building out two high-level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”, letting authors choose between them, and specifically banning Said from the former, is a viable option I’d consider. I think you have previously argued against this, and Said expressed dissatisfaction with it elsewhere in this comment section.
(This solution probably wouldn’t address my concerns about Duncan though)
If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curiosity/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I’d be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.
I am a little worried that this is a generalization that doesn’t line up with actual evidence on the ground, and instead is caused by some sort of vibe spiral. (I’m reluctant to suggest a lengthy evidence review, both because of the costs and because I’m somewhat uncertain of the benefits—if the problem is that lots of authors find Said annoying or his reactions unpredictable, and we review the record and say “actually Said isn’t annoying”, those authors are unlikely to find it convincing.)
In particular, I keep thinking about this comment (noting that I might be updating too much on one example). I think we have evidence that “Said can engage with open/curious/interpretative topics/posts in a productive way”, and should maybe try to figure out what was different that time.
I will probably build something that lets people Opt Into More Said.
I think in the sense of the general garden-style conflict (rather than the Said/Duncan conflict specifically), this is the only satisfactory solution that’s currently apparent: users picking the norms they get to operate under, like Commenting Guidelines, but more meaningful in practice.
There should be, for a start, just two options, Athenian Garden and Socratic Garden, so that commenters can cheaply make decisions about what kinds of comments are appropriate for a particular post, without having to read custom guidelines.
I do think, for the case of Said, building out two high level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”, letting authors choose between them, and specifically banning Said from the former, is a viable option I’d consider.
Excellent. I predict that Said wouldn’t be averse to voluntarily not commenting on “open/curious/cooperative” posts, or not commenting there in the kind of style that adherents of that culture dislike, so that “specifically banning Said” from that is an unnecessary caveat.
I did list “actually just encourage people to use the ban tool more” as an option. [...] If you actually want to advocate for that over a Said-specific rate-limit, I’m open to that (my model of you thinks that’s worse).
Well, I’m glad you’re telling actual-me this rather than using your model of me. I count the fact your model of me is so egregiously poor (despite our having a number of interactions over the years) as a case study in favor of Said’s interaction style (of just asking people things, instead of falsely imagining that you can model them).
Yes, I would, actually, want to advocate for informing users about a feature that already exists that anyone can use, rather than writing new code specifically for the purpose of persecuting a particular user that you don’t like.
Analogously, if the town council of the city I live in passes a new tax increase, I might grumble about it, but I don’t regard it as a direct personal threat. If the town council passes a tax increase that applies specifically to my friend Said Achmiz, and no one else, that’s a threat to me and mine. A government that does that is not legitimate.
It seems really wrong to me to think the only kind of conversation you need to make intellectual progress is “criticize without trying to figure out what the OP is about and what problems they’re trying to solve”.
So, usually when people make this kind of “hostile paraphrase” in an argument, I tend to take it in stride. I mostly regard it as “part of the game”: I think most readers can tell the difference between an attempted fair paraphrase (which an author is expected to agree with) and an intentional hostile paraphrase (which is optimized to highlight a particular criticism, without the expectation that the author will agree with the paraphrase). I don’t tell people to be more charitable to me; I don’t ask them to pass my ideological Turing test; I just say, “That’s not what I meant,” and explain the idea again; I’m happy to do the extra work.
In this particular situation, I’m inclined to try out a different commenting style that involves me doing less interpretive labor. I think you know very well that “criticize without trying to figure out what the OP is about” is not what Said and I think is at issue. Do you think you can rephrase that sentence in a way that would pass Said’s ideological Turing test?
I consider it a priority to resolve this in a way that won’t continue to eat up more of our time.
Right, so if someone complains about Said, point out that they’re free to strong-downvote him and that they’re free to ban him from their posts. That’s much less time-consuming than writing new code! (You’re welcome.)
If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style
Sorry, I thought your job was to run a website, not dictate to people how they should think and write? (Where part of running a website includes removing content that you don’t want on the website, but that’s not the same thing as decreeing that individuals must “integrat[e] the spirit-of-[your]-models into [their] commenting style”.) Was I mistaken about what your job is?
building out two high level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”
I am strongly opposed to this because I don’t think the proposed distinction cuts reality at the joints. (I’d be happy to elaborate on request, but will omit the detailed explanation now in order to keep this comment focused.)
We already let authors write their own moderation guidelines! It’s a blank text box! If someone happens to believe in this “cooperative vs. adversarial” false dichotomy, they can write about it in the text box! How is that not enough?
We already let authors write their own moderation guidelines! It’s a blank text box!
Because it’s a blank text box, it’s not convenient for commenters to read it in detail every time, so I expect almost nobody reads it, and these guidelines are not practical to follow.
With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.
Also, moderation guidelines aren’t visible on GreaterWrong at all, afaict. So Said specifically is unlikely to adjust his commenting in response to those guidelines, unless that changes.
(I assume Said mostly uses GW, since he designed it.)
I’ve been busy, so hadn’t replied to this yet, but specifically wanted to apologize for the hostile paraphrase (I notice I’ve done that at least twice now in this thread; I’m trying to do better, but it seems important for me to notice and pay attention to).
I think I worded the “corrigible about actually integrating the spirit-of-our-models into his commenting style” line pretty badly; Oliver and Vaniver also both thought it was pretty alarming. The thing I was trying to say I eventually reworded in my subsequent mod announcement as:
Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.
i.e. this isn’t about Said changing his own thought process, but, like, there is a spirit-of-the-law relevant to the mod decision here, and whether I need to worry about specification-gaming.
I expect you to still object to that for various reasons, and I think it’s reasonable to be pretty suspicious of me for phrasing it the way I did the first time. (I think it does convey something sus about my thought process, but, fwiw I agree it is sus and am reflecting on it)
Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.
I’m still uncertain how I feel about a lot of the details on this (and am enough of a lurker rather than poster that I suspect it’s not worth my time to figure that out / write it publicly), but I just wanted to say that I think this is an extremely good thing to include:
I will probably build something that lets people Opt Into More Said. I think it’s fairly likely the mod team will do some heavier-handed moderation in the nearish future, and a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a “let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way” option.
This strikes me basically as a way to move the mod team’s role more into “setting good defaults” and less “setting the only way things work”. How much y’all should move in that direction seems an open question, as it does limit how much cultivation you can do, but it seems like a very useful tool to make use of in some cases.
How technically troublesome would an allow list be?
Maybe the default is everyone gets three comments on a post. People the author has banned get zero, people the author has opted in for get unlimited, the author automatically gets unlimited comments on their own post, mods automatically get unlimited comments.
(Or if this feels more like a Said and/or Duncan specific issue, make the options “Unlimited”, “Limited”, and “None/Banned” then default to everyone at Unlimited except for Said and/or Duncan at Limited.)
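A minimal sketch of that tiering, under the assumption of a three-comment default; all names are hypothetical, not actual LessWrong code:

```typescript
// Hypothetical sketch of the tiered per-post comment allowance above.
// Names and structure are invented for illustration.

interface PostPolicy {
  authorId: string;
  bannedUserIds: Set<string>;  // author-banned users get zero comments
  optedInUserIds: Set<string>; // "let this user comment unfettered" list
}

const DEFAULT_LIMIT = 3; // default: everyone gets three comments per post

function commentAllowance(
  userId: string,
  post: PostPolicy,
  isModerator: boolean
): number {
  // The author and mods automatically get unlimited comments.
  if (userId === post.authorId || isModerator) return Infinity;
  if (post.bannedUserIds.has(userId)) return 0;
  if (post.optedInUserIds.has(userId)) return Infinity;
  return DEFAULT_LIMIT;
}
```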
There is definitely some term in my / the mod team’s equation for “this user is providing a lot of valuable stuff that people want on the site”. But the high-level call the moderation team is making is something like “maximize useful truths we’re figuring out”. Hearing about how many people are getting concrete value out of Said or Duncan’s comments is part of that equation; hearing about how many people are feeling scared or offput enough that they don’t comment/post much is also part of that equation. And there are also subtler interplays that depend on our actual model of how progress gets made.
I wonder how much of the difference in intuitions about Duncan and Said come from whether people interact with LW primarily as commenters or as authors.
The concerns about Said seem to be entirely from and centered around the concerns of authors. He makes posting mostly costly, he drives content away. Meanwhile many concerns about Duncan could be phrased as being about how he interacts with commenters.
If this trend exists, it is complicated. Said gets >0 praise from authors for his comments on their own posts (e.g. Raemon here), major Said-defender Zack has written lots of well-regarded posts, and Said-banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts. Duncan also generates a fair amount of concern for attempts to set norms outside his own posts. But I think there might be a thread here.
Said-banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts.
Thank you for the compliment!
When writing science commentary, my participation is contingent on there being a specific job to do (often, “dig up quotes from links and citations and provide context”) and a lively conversation. The units of work are bite-sized. It’s easy to be useful and appreciated.
Writing posts, relatively speaking, is not my strong suit to begin with. There’s no preselection on people being interested enough to drive a discussion, what makes a post “interesting” is unclear, and the amount of work required to make it good is large enough that it feels like work more than play. When I do get a post out, it often fails to attract much attention. What attention it does receive is often negative, and Said is one of the more prolific providers of negative attention. Hence, I ban Said because he further inhibits me from developing in my areas of relative weakness.
My past conflict with Duncan arose when I would impute motives to him, or blur the precise distinctions in language he was attempting to draw—essentially failing to adopt the “referee” role that works so well in science posts, and putting the same negative energy I dislike receiving into my responses to Duncan’s posts. When I realized this was going on, I apologized and changed my approach, and now I no longer feel a sense of “danger” in responding to Duncan’s posts or comments. I feel that my commenting strong suit is quite compatible with friendly discourse with Duncan, and Duncan is good at generating lively discussions where my refereeing skillset may be of use.
So if I had to explain it, some people (me, Duncan) are sensitive about posting, while others are sharp in their comments (Said, anonymousaisafety). Those who are sensitive about posting will get frustrated by Said, while those who write sharp comments will often get in conflict with Duncan.
I’m not sure what other user you’re referring to besides Achmiz—it looks like there’s supposed to be another word between “about” and “and” in your first sentence, and between “about” and “could” in the last sentence of your second paragraph, but it’s not rendering correctly in my browser? Weird.
Anyway, I think the pattern you describe could be generated by a philosophical difference about where the burden of interpretive labor rests. A commenter who thinks that authors have a duty to be clear (and therefore asks clarifying questions, or makes attempted criticisms that miss the author’s intended point) might annoy authors who think that commenters have a duty to read charitably. Then the commenter might be blamed for driving authors away, and the author might be blamed for getting too angrily defensive with commenters.
major Said-defender Zack has written lots of well-regarded posts
I interact with this website as an author more than a commenter these days, but in terms of the dichotomy I describe above, I am very firmly of the belief that authors have a duty to be clear. (To the extent that I expect someone who disagrees with me to also disagree with my proposed dichotomy; I’m not claiming to be passing anyone’s ideological Turing test.)
The other month I published a post that I was feeling pretty good about, quietly hoping that it might break a hundred karma. In fact, the comment section was very critical (in ways that I didn’t have satisfactory replies to), and the post only got 18 karma in 26 votes, an unusually poor showing for me. That made me feel a little bit sad that day, and less likely to write future posts that I could anticipate being disliked by commenters in the way that this post was disliked.
In my worldview, this is exactly how things are supposed to work. I didn’t have satisfactory replies to the critical comments. Of course that’s going to result in downvotes! Of course it made me a little bit sad that day! (By “conservation of expected feelings”: I would have felt a little bit happy if the post did well.) Of course I’m going to try not to write posts relevantly “like that” in the future!
I’ve been getting the sense that a lot of people somehow seem to disagree with me that this is exactly how things are supposed to work? But I still don’t think I understand why. Or rather, I do have an intuitive model of why people seem to disagree, but I can’t quite permit myself to believe it, because it’s too uncharitable; I must not be understanding correctly.
Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.
I think “duty to be clear” skips over the hard part, which is that “being clear” is transitive: it doesn’t make sense to say a post is clear or unclear, only who it is clear or unclear to.
To use a trivial example: well-taught Physics 201 is clear to you if you’ve had the prerequisite physics classes or are a physics savant, but not if you’re a layman. Poorly taught Physics 201 is clear to a subset of the people who would understand it if well-taught. And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 → Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post but use fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.
So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “100%” but being more specific than that can be hard and is prone to disagreement.
Commenters of course have every right to say “I don’t understand this” and politely ask questions. But I, and I suspect the mods and most authors, reject the idea that publishing a piece on LessWrong gives me a duty to make every reader understand it. That may cost me karma or respect, and I think that’s fine*; I’m not claiming a positive right to other people’s high regard.
You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct, but not at the limit: there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.
*although I think downvoting things I don’t understand is tricky specifically because it’s hard to tell where the problem lies, so I rarely do.
You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct, but not at the limit: there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.
YES. I think this is hugely important, and I think it’s a pretty good definition of the difference between a confused person and a crank.
Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they’re lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or higher status (at least on the issue at hand) as the person they’re addressing. They already expect the author they’re questioning is fundamentally confused, and so they don’t waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank’s attention, since they’re obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.
There’s absolutely a middle ground. There are many times when I ask questions (let’s say of an academic author) where I think the author is probably either wrong or misguided in their analysis. But outside of pointing out specific facts that I know are wrong and suspect the author might not have noticed, I never address these authors in the manner of a crank. If I bother to contact them, it’s to ask questions that do things like:
Describe my specific disagreement succinctly, and ask the author to explain why they think about or approach the issue differently
Ask about the points in the author’s argument I don’t fully understand, in case those turn out to be cruxes
Ask what they think about my counterargument, on the assumption that they’ve already thought about it and have a pretty good answer that I’m genuinely interested in hearing
This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or higher status (at least on the issue at hand) as the person they’re addressing. They already expect the author they’re questioning is fundamentally confused, and so they don’t waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank’s attention, since they’re obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.
And this attitude is particularly corrosive to feelings of trust, collaboration, “jamming together,” etc. … it’s like walking into a martial arts academy and finding a person present who scoffs at both the instructors and the other students alike, and who doesn’t offer sufficient faith to even try a given exercise once before first a) hearing it comprehensively justified and b) checking the sparring records to see if people who did that exercise win more fights.
Which, yeah, that’s one way to zero in on the best martial arts practices, if the other people around you also signed up for that kind of culture and have patience for that level of suspicion and mistrust!
(I choose martial arts specifically because it’s a domain full of anti-epistemic garbage and claims that don’t pan out.)
But in practice, few people will participate in such a martial arts academy for long, and it’s not true that a martial arts academy lacking that level of rigor makes no progress in discovering and teaching useful things to its students.
You’re describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.
The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you’re right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can’t, and no one can, then he might have a point, and the gym gets to learn something new.
If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they’re an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren’t there in the first place. It’s definitely more challenging to jam with dissonant characters like that (especially if they’re dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it’s important to realize that the problem isn’t so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
Strong disagree that I’m describing a deeply dysfunctional gym; I barely described the gym at all and it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
There’s a difference between “hey, I want to understand the underpinnings of this” and the thing I described, which is hostile to the point of “why are you even here, then?”
Edit: I view the votes on this and the parent comment as indicative of a genuine problem; jimmy above is exhibiting actually bad reasoning (à la representativeness) and the LWers who happen to be hanging around this particular comment thread are, uh, apparently unaware of this fact. Alas.
Strong disagree that I’m describing a deeply dysfunctional gym; I barely described the gym at all and it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
Well, you mentioned the scenario as an illustration of a “particularly corrosive” attitude. It therefore seems reasonable to fill in the unspecified details (like just how disruptive the guy’s behavior is, how much of everyone’s time he wastes, how many instructors are driven away in shame or irritation) with pretty negative ones—to assume the gym has in fact been corroded, being at least, say, moderately dysfunctional as a result.
Maybe “deeply dysfunctional” was going too far, but I don’t think it’s reasonable to call that “way overconfident/projection-y”. Nor does the difference between “deeply dysfunctional” and “moderately dysfunctional” matter for jimmy’s point.
votes
FYI, I’m inclined to upvote jimmy’s comment because of the second paragraph: it seems to be the perfect solution to the described situation (and to all hypothetical dysfunction in the gym, minor or major), and has some generalizability (look for cheap tests of beliefs, challenge people to do them). And your comment seems to be calling jimmy out inappropriately (as I’ve argued above), so I’m inclined to at least disagree-vote it.
“Let’s imagine that these unspecified details, which could be anywhere within a VERY wide range, are specifically such that the original point is ridiculous, in support of concluding that the original point is ridiculous” does not seem like a reasonable move to me.
Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.
But it’s not clear how important that mistake is to his comment. I expect people were mostly reacting to paragraphs 2 and 3, and you could cut paragraph 1 out and they’d stand by themselves.
Do the more-interesting parts of the comment implicitly rely on the projection/unjustified-claim? Also not clear to me. I do think the comment is overstated. (“The way to jam”?) But e.g. “the problem isn’t so much the difficulty as the inability to overcome the difficulty” seems… well, I’d say this is overstated too, but I do think it’s pointing at something that seems valuable to keep in mind even if we accept that the gym is functional.
So I don’t think it’s unreasonable that the parent got significantly upvoted, though I didn’t upvote it myself; and I don’t think it’s unreasonable that your correction didn’t, since it looks correct to me but like it’s not responding to the main point.
Maybe you think paragraphs 2 and 3 were relying more on the projection than it currently seems to me? In that case you actually are responding to what-I-see-as the main point. But if so I’d need it spelled out in more detail.
Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.
FWIW, that is a claim I’m fully willing and able to justify. It’s hard to disclaim all the possible misinterpretations in a brief comment (e.g. “deeply” != “very”), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.
There’s a difference between “hey, I want to understand the underpinnings of this” and the thing I described, which is hostile to the point of “why are you even here, then?”
Yes, and that’s why I described the attitude as “dysfunctionally dissonant” (emphasis in original). It’s not a good way of challenging the instructors, and not the way I recommend behaving.
What I’m talking about is how a healthy gym environment is robust to this sort of dysfunctional dissonance, and how to productively relate to unskilled dissonance by practicing skillfully enough yourself that the system’s combined dysfunction never becomes supercritical and instead decays towards productive cooperation.
it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
That’s certainly one possibility. But isn’t it also conceivable that I simply see underlying dynamics (and lack thereof) which you don’t see, and which justify the confidence level I display?
It certainly makes sense to track the hypothesis that I am overconfident here, but ironically it strikes me as overconfident to be asserting that I am being overconfident without first checking things like “Can I pass his ITT”/”Can I point to a flaw in his argument that makes him stutter if not change his mind”/etc.
To be clear, my view here is based on years of thinking about this kind of problem and practicing my proposed solutions with success, including in a literal martial arts gym for the last eight years. Perhaps I should have written more about these things on LW so my confidence doesn’t appear to come out of nowhere, but I do believe I am able to justify what I’m saying very well and won’t hesitate to do so if anyone wants further explanation or sees something which doesn’t seem to fit. And hey, if it turns out I’m wrong about how well supported my perspective is, I promise not to be a poor sport about it.
jimmy above is exhibiting actually bad reasoning (à la representativeness)
In the absence of an object-level counterargument, this is textbook ad hominem. I won’t argue that there isn’t a place for that (or that it’s impossible that my reasoning is flawed), but I think it’s hard to argue that it isn’t premature here. As a general rule, anyone who disagrees with anyone can come up with a million accusations of this sort, and it isn’t uncommon for some of them to be right to an extent, but it’s really hard to have a productive conversation if such accusations are used as a first resort rather than as a last resort. Especially when they aren’t well substantiated.
I see that you’ve deactivated your account now so it might be too late, but I want to point out explicitly that I actively want you to stick around and feel comfortable contributing here. I’m pushing back against some of the things you’re saying because I think that it’s important to do so, but I do not harbor any ill will towards you nor do I think what you said was “ridiculous”. I hope you come back.
Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I’m unlikely to guess it; you’ll have to clarify.
I thought it was a reference to, among other things, this exchange where Said says one of Duncan’s Medium posts was good, and Duncan responds that his decision to not post it on LW was because of Said. If you’re observing that Said could just comment on Medium instead, or post it as a linkpost on LW and comment there, I think you’re correct. [There are, of course, other things that are not posted publicly, where I think it then becomes true.]
I do want to acknowledge that, based on various comments and vote patterns, I agree it seems like a pretty controversial call, and I model it as something like spending down (and/or making a bet with) a limited resource; maybe two specific resources: “trust in the mods” and “some groups of people’s willingness to put up with the site being optimized in a way they think is wrong.”
Despite that, I think it is the right call to limit Said significantly in some way, but I don’t think we can make that many moderation calls on users this established that are this controversial without causing some pretty bad things to happen.
I don’t think we can make that many moderation calls on users this established that are this controversial without causing some pretty bad things to happen.
Indeed. I would encourage you to ask yourself whether the number referred to by “that many” is greater than zero.
50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years
I don’t remember this. I feel like Aella’s post introduced the term?
A better example might be Circling, though I think Said might have had a point if it hadn’t been carefully scrutinized; a lot of people had just been doing it.
Frame control was a pretty central topic in “what’s going on with Brent?” discussions two years prior, as well as in some other circumstances. We’d been talking about it internally at Lightcone/LessWrong during that time.
I think the term was getting used, but it makes sense if you weren’t as involved in those conversations. (I just checked, and there’s only one old internal LW-Slack message about it from 2019, but it didn’t feel like a new term to me at the time, and I’m pretty sure it came up a bunch on FB and in moderation convos periodically under that name.)
For the record, I think the value here is “Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world”, and I don’t think that comes across in this bullet.
Yeah I agree with this, and agree it’s worth emphasizing more. I’m updating the most recent announcement to indicate this more, since not everyone’s going to read everything in this thread.
I could imagine an admin feature that literally just lets Said comment a few times on a post, but if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day.
I feel like this incentivizes comments to be short, which doesn’t make them less aggravating to people. For example, IIRC people have complained about him commenting “Examples?”. This is not going to be hit hard by a rate limit.
‘Examples?’ is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
I join Ray and Gwern in noting that asking for examples is generically good (and that I’ve never felt or argued to the contrary). Since my stance on this was called into question, I elaborated:
If one starts out looking to collect and categorize evidence of their conversational partner not doing their fair share of the labor, then a bunch of comments that just say “Examples?” would go into the pile. But just encountering a handful of comments that just say “Examples?” would not be enough to send a reasonable person toward the hypothesis that their conversational partner reliably doesn’t do their fair share of the labor.
“Do you have examples?” is one of the core, common, prosocial moves, and correctly so. It is a bid for the other person to put in extra work, but the scales of “are we both contributing?” don’t need to be balanced every three seconds, or even every conversation. Sometimes I’m the asker/learner and you’re the teacher/expounder, and other times the roles are reversed, and other times we go back and forth.
The problem is not in asking someone to do a little labor on your behalf. It’s having 85+% of your engagement be asking other people to do labor on your behalf, and never reciprocating, and when people are like, hey, could you not, or even just a little less? being supercilious about it.
My recent experience has been that saying “this is half-baked” is not met with a subsequent shift in commentary toward meeting the “Oh, I don’t have any yet, this is speculative, so YMMV” tone.
I think it would be nice if LW could have both tones:
I’m claiming this quite confidently; bring on the challenges, I’m ready to convince
I have a gesture in a direction I’m pretty sure has merit, but am not trying to e.g. claim that if others don’t update to my position they’re wrong; this is a sapling and I’d like help growing it, not help stepping on it.
Trying to do things in the latter tone on LW has felt, to me, extremely anti-rewarding of late, and I’m hoping that will change, because I think a lot of good work happens there. That’s not to say that the former tone is bad; it feels like they are twin pillars of intellectual progress.
Noting that my very first lesswrong post, back in the LW1 days, was an example of #2. I was wrong on some of the key parts of the intuition I was trying to convey, and ChristianKl corrected me. As an introduction to posting on LW, that was pretty good—I’d hate to think that’s no longer acceptable.
At the same time, there is less room for it as the community has gotten much bigger, and I’d probably weak-downvote a similar post today rather than trying to engage with a similar mistake, given how much content there is. Not sure if there is anything that can be done about this, but it’s an issue.
fwiw that seems like a pretty great interaction. ChristianKl seems to be usefully engaging with your frame while noting things about it that don’t seem to work, seems (to me) to have optimized somewhat for being helpful, and also the conversation just wraps up pretty efficiently. (And I think this is all a higher bar than what I mean to be pushing for, i.e. having only one of those properties would have been fine.)
I agree, but I think that now, if and when similarly early-stage thoughts on a conceptual model are proposed, there is less ability or willingness to engage, especially with people who are fundamentally confused about some aspect of the issue. This is largely, I believe, due to the volume of new participants, and the reduced engagement for those types of posts.
I want to reiterate that I actually think the part where Said says “examples?” is basically just good (and is only bad insofar as it creates a looming worry of particular kinds of frustrating, unproductive and time-consuming conversations that are likely to follow in some subsets of discussions)
(edit: I actually am pretty frustrated that “examples?” became the go-to example people talked about and reified as a kinda rude thing Said did. I think I basically agree this process is good:
Alice → writes confident posts without examples
Bob → says “examples?”
Alice → either gives (at least one, and yeah ideally 3) examples, or says “Oh, I don’t have any yet, this is speculative, so YMMV”, or doesn’t reply but feels a bit chagrined.)
I don’t think it’s “strong” evidence per se, but it was evidence that something I’d previously thought was more of a specific pet peeve of Duncan’s was objected to by more LessWrong folk.
(Where the thing in question is something like “making sweeping ungrounded claims about other people… but in a sort of colloquial/hyperbolic way which most social norms don’t especially punish”.)
Some evidence for that, though it also seems likely to get upvoted on the basis of being “well written and evocative of a difficult personal experience”, or because people relate to being outliers and unusual even if they didn’t feel alienated and hurt in quite the same way. I’m unsure.
If the lifeguard isn’t on duty, then it’s useful to have the ability to be your own lifeguard.
I wanted to say that I appreciate the moderation-style options and authors being able to delete comments and ban users on their own posts. While we’re talking about what to change and what isn’t working, I’d like to weigh in on the side of that being a good set of features that should be kept. Raemon, you’ve mentioned those features are there to be used. I’ve never used the capability and I’m still glad it exists. (I can barely use it, actually.) Since site-wide moderators aren’t going to intervene everywhere quickly (which I don’t think they should or even can; moderators are heavily outnumbered), I think letting people moderate their local piece is good.
If I ran into lots of negative feedback I didn’t think was helpful, and it wasn’t getting moderated by me or the site admins, I’d just move my writing to a blog on a different website where I could control things. Possibly I’d set up crossposting like Zvi or Jefftk and then ignore the LessWrong comment section. If lots of people do that, then we get the diaspora effect from late LessWrong 1.0. Having people at least crossposting to LessWrong seems good to me, since I like tools like the agreement karma and the tag upvotes. Basically, the BATNA for a writer who doesn’t like LessWrong’s comment section is Wordpress or Substack. Some writers you’d rather have go elsewhere, obviously, but Said’s and Duncan’s top-level posts seem mostly a good fit here.
I do have a question about norm setting I’m curious about. If Duncan had titled his post “Duncan’s Basics of Rationalist Discourse” would that have changed whether it merited the exception around pushing site wide norms? What if lots of people started picking Norm Enforcing for the moderation guidelines and linking to it?
I do have a question about norm setting I’m curious about. If Duncan had titled his post “Duncan’s Basics of Rationalist Discourse” would that have changed whether it merited the exception around pushing site wide norms? What if lots of people started picking Norm Enforcing for the moderation guidelines and linking to it?
Yeah, I think this’d be much less cause for concern. (I haven’t checked whether the rest of the post has anything else that felt LW-wide-police-y about it; I’d maybe have wanted a slightly different opening paragraph or something.)
One thing I’d want going forward is a more public comment that, if he’s going to keep posting on LessWrong, he’s not going to do that (take down all his LW posts) again.
I think Duncan also posts all his articles on his own website, is this correct?
In that case, would it be okay to replace the article on LW with a link to Duncan’s website? So that the articles stay there, the comments stay here, the page with comments links the article, but the article does not link the page with comments.
I am not suggesting to do this. I am asking: if Duncan (or anyone else) hypothetically at some moment decided, for whatever reason, that he is uncomfortable with his articles being on LW, would doing this (moving the articles elsewhere and replacing them with links to the new place) be acceptable to you? Like, could this be a policy of “if you decide to move away from LW, this is our preferred way to do it”?
Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking.
Both types of content are needed. Writing posts pattern matches with Babbling/Building, whereas writing comments matches closer to Pruning/Breaking. In my mind anyway. (update: prediction market)
Inspired by this post, I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way, if you have a disagreement or are misunderstanding a post, there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that is an indication of a deeper model disagreement, and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turnover.)
Obviously the exact ratio doesn’t have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.
I’m not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post.
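If it helps make the proposal concrete, here is a minimal sketch of the check, reading the ratio as a flat per-post cap (per the footnote, the limit applies only to the specific post); names are hypothetical:

```typescript
// Hypothetical sketch of the proposed per-post comment cap.
// The limit applies per specific post, not site-wide (see footnote).

const RATIO = 3; // comments allowed per post; "could be 10:1 or whatever"

interface CommentRecord {
  postId: string;
  authorId: string;
}

function isRateLimitedOnPost(
  userId: string,
  postId: string,
  comments: CommentRecord[]
): boolean {
  // Count only this user's comments on this particular post.
  const usedOnThisPost = comments.filter(
    (c) => c.authorId === userId && c.postId === postId
  ).length;
  return usedOnThisPost >= RATIO;
}
```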
I find the fact that you see comments as criticism, and not as expanding and continuing the building, indicative of what I see as problematic. Good comments should most of the time not be criticism; they should be part of the building.
The dynamic that is good in my eyes is one where comments make the post better not by criticizing it, but by sharing examples, personal experiences, intuitions, and the relations of those to the post.
Counting all comments as prune instead of babble disincentivizes babble-comments. Is this what you want?
I don’t see all comments as criticism. Many comments are of the building up variety! It’s that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times.
Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.
The benefits of building-comments are easy to get in 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by only getting 3 comments per day per post.
I think we have very different models of things, so I will try to clarify mine. My best babble-site example is not in English, so I will give another one: the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. Just look at the sheer LENGTH of that page!
There are many more than 3 comments per person there.
From my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. My best discussions with friends go: one shares a model, one asks questions, or shares a different model, or shares an experience, the other reacts, etc., for way more than three comments; more like 30 comments. It’s dialog. And there are lots of unproductive examples of that on LW, and it’s quite possible (as in, I assign it probability 0.9) that in first-order effects the rule would cut out unproductive discussions and be positive.
But I find rules that prevent the best things from happening bad in some way that I can’t explain clearly. Something like: I’m here to try to go higher. If that’s impossible, then why bother?
I also think it’s a VERY restrictive rule. I wrote more than three comments here, and you are the first one to answer me. Like, I’m just right now taking part in a counter-example to “I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.”
I shared my opinions on very different and unrelated parts of this conversation here. This is my sixth comment, and I feel I reacted very low-heat. The idea that I should avoid or conserve those comments to have only three makes me want to avoid commenting on LW altogether. The message I get from this rule is like… like I’m assumed guilty of a thing I literally never do, and so have very restrictive rules placed on me, and it’s very unfriendly in a way that I find hard to describe.
Like, 90% of the activity this rule would restrict is legitimate, good comments. That is an awful false-positive ratio, even if you don’t count the you-are-bad-and-unwelcome effect, which I feel from it and you, apparently, do not.
Yeah this is the sort of solution I’m thinking of (although it sounds like you’re maybe making a more sweeping assumption than me?)
My current rough sense is that a rate limit of 3 comments per post per day (maybe with an additional wordcount-based limit per post per day) would actually be pretty reasonable at curbing the things I’m worried about (for users that seem particularly prone to causing demon threads)
Said and Duncan are both among the two single-most complained about users since LW2.0 started (probably both in top 5, possibly literally top 2).
Complaints by whom? And why are these complaints significant?
Are you taking the stance that all or most of these complaints are valid, i.e. that the things being complained about are clearly bad (and not merely dispreferred by this or that individual LW member)?
(See also this recent comment, where I argue that at least one particular characterization of my commenting activity is just demonstrably inconsistent with reality.)
Here’s a bit of metadata on this: I can recall offhand 7 complaints from users with 2000+ karma who aren’t on the mod team (most of whom had significantly more than 2000 karma, and all of them had some highly upvoted comments and/or posts that are upvoted in the annual review). One of them cites you as being the reason they left LessWrong a few years ago, and ~3-4 others cite you as being a central instance of a pattern that means they participate less on LessWrong, or can’t have particularly important types of conversations here.
I also think most of the mod team (at least 4 of them? maybe more) have had such complaints (as users, rather than as moderators)
I think there’s probably at least 5 more people who complained about you by name who I don’t think have particularly legible credibility beyond “being some LessWrong users.”
I’m thinking about my reply to “are the complaints valid tho?”. I have a different ontology here.
There are some problems with this as pointing in a particular direction. There is little opportunity for people to be prompted to express opposite-sounding opinions, and so only the above opinions are available to you.
I have a concern that Said and Zack are an endangered species that I want there to be more of on LW and I’m sad they are not more prevalent. I have some issues with how they participate, mostly about tendencies towards cultivating infinite threads instead of quickly de-escalating and reframing, but this in my mind is a less important concern than the fact that there are not enough of them. Discouraging or even outlawing Said cuts that significantly, and will discourage others.
Ray pointing out the level of complaints is informative even without (far more effort) judgement on the merits of each complaint. There being a lot of complaints is evidence (to both the moderation team and the site users) that it’s worth putting in effort here to figure out if things could be better.
There being a lot of complaints is evidence [...] that it’s worth putting in effort here to figure out if things could be better.
It is evidence that there is some sort of problem. It’s not clear evidence about what should be done about it, about what “better” means specifically. Instituting ways of not talking about the problem anymore doesn’t help with addressing it.
It didn’t seem like Said was complaining about the reports being seen as evidence that it is worth figuring out whether things could be better. Rather, he was complaining about them being used as evidence that things could be better.
If we speak precisely… in what way would they be the former without being the latter? Like, if I now think it’s more worth figuring out whether things could be better, presumably that’s because I now think it’s more likely that things could be better?
(I suppose I could also now think the amount-they-could-be-better, conditional on them being able to be better, is higher; but the probability that they could be better is unchanged. Or I could think that we’re currently acting under the assumption that things could be better, I now think that’s less likely so more worth figuring out whether the assumption is wrong. Neither seems like they fit in this case.)
Separately, I think my model of Said would say that he was not complaining, he was merely asking questions (perhaps to try to decide whether there was something to complain about, though “complain” has connotations there that my model of Said would object to).
So, if you think the mods are doing something that you think they shouldn’t be, you should probably feel free to say that (though I think there are better and worse ways to do so).
But if you think Said thinks the mods are doing something that Said thinks they shouldn’t be… idk, it feels against-the-spirit-of-Said to try to infer that from his comment? Like you’re doing the interpretive labor that he specifically wants people not to do.
My comment wasn’t well written, I shouldn’t have used the word “complaining” in reference to what Said was doing. To clarify:
As I see it, there are two separate claims:
That the complaints prove that Said has misbehaved (at least a little bit)
That the complaints increase the probability that Said has misbehaved
Said was just asking questions—but baked into his questions is the idea of the significance of the complaints, and this significance seems to be tied to claim 1.
Jefftk seems to be speaking about claim 2. So, his comment doesn’t seem like a direct response to Said’s comment, although the point is still a relevant one.
Said seems to actively prefer a world where the people who are annoyed by him go away, and thinks it’d be fine if this meant LessWrong had radically fewer posts. I think he’s misunderstanding something about how intellectual progress actually works, and about how valuable his comments actually are. (As I said previously, I tend to think Said’s first couple comments are worthwhile. The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics)
We’ve had extensive conversations with Said about changing his approach here. He seems pretty committed to not changing his approach. So, if he’s sticking around, I think we’d need some kind of tech solution. The outcome I want here is that in practice Said doesn’t bother people who don’t want to be bothered. This could involve solutions somewhat specific-to-Said, or (maybe) be a sitewide rule that works out to stop a broader class of annoying behavior. (I’m skeptical the latter will turn out to work without being net-negative, capturing too many false positives, but seems worth thinking about)
Here are a couple ideas:
Easily-triggered rate-limiting. I could imagine an admin feature that literally just lets Said comment a few times on a post, but if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day. (A rough sketch of this idea follows this list.) I expect fine-tuning this to actually work the way I imagine in my head is a fair amount of work, but not that much.
Proactive warning. If a post author has downvoted Said’s comments on their post multiple times, they get some kind of UI alert saying “Yo, FYI, admins have flagged this user as someone with a pattern of commenting that a lot of authors have found net-negative. You may want to take that into account when deciding how much to engage”.
There’s some cluster of ideas surrounding how authors are informed/encouraged to use the banning options. It sounds like the entire topic of “authors can ban users” is worth revisiting so my first impulse is to avoid investing in it further until we’ve had some more top-level discussion about the feature.
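A rough sketch of the first idea, the downvote-triggered wordcount rate-limit; the thresholds and names below are assumptions invented for illustration, not settled values:

```typescript
// Hypothetical sketch of the downvote-triggered rate limit (first idea
// above). All thresholds and names are invented for illustration.

interface PerPostStats {
  commentsOnPost: number; // comments this user has made on the post
  netKarmaOnPost: number; // combined vote score of those comments
}

const FREE_COMMENTS = 3;           // "comment a few times on a post"
const DOWNVOTE_TRIGGER = -5;       // assumed "significantly downvoted" cutoff
const LIMITED_WORDS_PER_DAY = 300; // assumed wordcount budget once triggered

function wordBudgetRemaining(
  stats: PerPostStats,
  wordsPostedToday: number
): number {
  // No limit during the first few comments, or while not heavily downvoted.
  if (stats.commentsOnPost < FREE_COMMENTS) return Infinity;
  if (stats.netKarmaOnPost > DOWNVOTE_TRIGGER) return Infinity;
  // Once significantly downvoted, a daily word budget forces the user to
  // wrap up their current points quickly and then call it a day.
  return Math.max(0, LIMITED_WORDS_PER_DAY - wordsPostedToday);
}
```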
Why is it worth this effort?
You might ask “Ray, if you think Said is such a problem user, why bother investing this effort instead of just banning him?”. Here are some areas I think Said contributes in a way that seem important:
Various ops/dev work maintaining sites like readthesequences.com, greaterwrong.com, and gwern.net. (edit: as Ben Pace notes, this is pretty significant, and I agree with his note that “Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world”)
Most of his comments are in fact just pretty reasonable and good in a straightforward way.
While I don’t get much value out of protracted conversations about it, I do think there’s something valuable about Said being very resistant to getting swept up in fad ideas. Sometimes the emperor in fact really does have no clothes. Sometimes the emperor has clothes, but you really haven’t spelled out your assumptions very well and are confused about how to operationalize your idea. I do think this is pretty important and would prefer Said to somehow “only do the good version of this”, but seems fine to accept it as a package-deal.
Re: Duncan
I’ve spent years trying to hash out “what exactly is the subtle but deep/huge difference between Duncan’s moderation preferences and the LW team’s?” I have found each round of that exchange valuable, but typically it didn’t turn out that whatever-we-thought-was-the-crux was a particularly Big Crux.
I think I care about each of the things Duncan is worried about (i.e. such as the things listed in Basics of Rationalist Discourse). But I tend to think the way Duncan goes about trying to enforce such things is extremely costly.
Here’s this month/year’s stab at it: Duncan cares particularly about strawmans/mischaracterizations/outright-lies getting corrected quickly (i.e. within ~24 hours). (See Concentration of Force for his writeup on at least one set of reasons this matters.) I think there is value in correcting them or telling people to “knock it off” quickly. But,
a) moderation time is limited
b) even in the world where we massively invest in moderation… the thing Duncan cares most about moderating quickly just doesn’t seem like it should necessarily be at the top of the priority queue to me?
I was surprised and updated on You Don’t Exist, Duncan getting as heavily upvoted as it did, so I think it’s plausible that this is all a bigger deal than I currently think it is. (that post goes into one set of reasons that getting mischaracterized hurts). And there are some other reasons this might be important (that have to do with mischaracterizations taking off and becoming the de-facto accepted narrative).
I do expect most of our best authors to agree with Duncan that these things matter, and generally want the site to be moderated more heavily somehow. But I haven’t actually seen anyone but Duncan argue they should be prioritized nearly as heavily as he wants. (i.e. rather than something you just mostly take-in-stride, downvote and then try to ignore, focusing on other things)
I think most high-contributing users agree the site should be moderated more (see the significant upvotes on LW Team is adjusting moderation policy), but don’t necessarily agree on how. It’d be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.
I don’t know that that really captured the main thing here. I feel less resolved on what should change on LessWrong re: Duncan. But I (and other LW site moderators) want to be clear that while strawmanning is bad and you shouldn’t do it, we don’t expect to intervene on most individual cases. I recommend strong-downvoting, and leaving one comment stating that the thing seems false.
I continue to think it’s fine for Duncan to moderate his own posts however he wants (although as noted previously I think an exception should be made for posts that are actively pushing sitewide moderation norms)
Some goals I’d have are:
people on LessWrong feel safe that they aren’t likely to get into sudden, protracted conflict with Duncan that persists outside his own posts.
the LessWrong team and Duncan are on-the-same-page about the LW team not being willing to allocate dozens of hours of attention at a moment’s notice in the specific ways Duncan wants. I don’t think it’s accurate to say “there’s no lifeguard on duty”, but I think it’s quite accurate to say that the lifeguard on duty isn’t planning to prioritize the things Duncan wants; so, Duncan should basically participate on LessWrong as if there is, in effect, “no lifeguard” from his perspective. I’m spending ~40 hours this week processing this situation, with a goal of basically not having to do that again.
In the past Duncan took down all his LW posts when LW seemed to be actively hurting him. I’ve asked him about this in the past year, and (I think?) he said he was confident that he wouldn’t. One thing I’d want going forward is a more public comment that, if he’s going to keep posting on LessWrong, he’s not going to do that again. (I don’t mind him taking down 1-2 problem posts that led to really frustrating commenting experiences for him, but if he were likely to take all the posts down that undercuts much of the value of having him here contributing)
FWIW I do think it’s moderately likely that the LW team writes a post taking many concepts from Basics of Rationalist Discourse and integrating them into our overall moderation policy. (It’s maybe doable for Duncan to rewrite the parts that some people object to, and to enable commenting on those posts by everyone. But I think it’s kinda reasonable for people to feel uncomfortable with Duncan setting the framing, and it’s worth the LW team having a dedicated “our frame on what the site norms are” anyway.)
In general I think Duncan has written a lot of great posts – many of his posts have been highly ranked in the LessWrong review. I expect him to continue to provide a lot of value to the LessWrong ecosystem one way or another.
I’ll note that while I have talked to Duncan for dozens(?) of hours trying to hash out various deep issues and not met much success, I haven’t really tried negotiating with him specifically about how he relates to LessWrong. I am fairly hopeful we can work something out here.
I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.
I note re:
… that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would’ve been less likely to leave and would be more likely to return with marginal movement in that direction.
I don’t know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like “how would you have felt if we had moved 25% in this direction,” I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more “what? No, we’re well-adapted to the current environment; we’re the ones who’ve been filtered for.”
(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)
Nod. I want to clarify: the diff I’m asking about, and am skeptical about, is “assuming, holding constant, that LessWrong generally tightens moderation standards along many dimensions, but doesn’t especially prioritize the cluster of areas around ‘strawmanning being considered especially bad’ and ‘making unfounded statements about a person’s inner state’”.
i.e. the LessWrong team is gearing up to invest a lot more in moderation one way or another. I expect you to be glad that happened, but still frequently feel in pain on the site and feel a need to take some kind of action regarding it. So, the poll I’d want is something like “given overall more mod investment, are people still especially concerned about the issues I associate with Duncan-in-particular”.
I agree some manner of poll in this space would be good, if we could implement it.
FWIW, I don’t avoid posting because of worries of criticism or nitpicking at all. I can’t recall a moment that’s ever happened.
But I do avoid posting once in a while, and avoid commenting, because I don’t always have enough confidence that, if things start to move in an unproductive way, there will be any *resolution* to that.
If I’d been on LessWrong a lot 10 years ago, this wouldn’t have stopped me much. I used to be very… well, not happy exactly, but willing, to spend hours fighting the good fight and highlighting all the ways people are being bullies or engaging in bad argument norms or polluting the epistemic commons or using performative Dark Arts and so on.
But moderators of various sites (not LW) have often been unable to adjudicate such situations to my satisfaction, and over time I just felt it wasn’t worth the effort in most cases.
From what I’ve observed, the LW mod team is far better at this than most sites’. But when I imagine a nearer-to-perfect world, it does include a lot more “heavy-handed” moderation, in the form of someone outside an argument being willing and able to judge and highlight whether someone is failing in some essential way to be a productive conversation partner.
I’m not sure what the best way to do this would be, mechanically, given realistic time and energy constraints. Maybe a special “Flag a moderator” button with a limited number of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate? Maybe even that would be too onerous, but *shrugs*. There’s probably a scale at which it’s valuable for most people while still being insufficient for someone like Duncan. Maybe the allotment decreases each time you’re ruled against.
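(To make the mechanics concrete, here’s a minimal sketch of how such a quota might work; all the names, and the karma-to-budget curve, are invented for illustration, and nothing here is an actual LW feature:)

```typescript
// Hypothetical monthly "Flag a moderator" budget: a small base allowance,
// growing slowly with account karma, shrinking each time a mod rules
// against the flagger. All names and numbers are invented.
interface Flagger {
  karma: number;
  timesRuledAgainst: number;
  flagsUsedThisMonth: number;
}

function monthlyFlagBudget(u: Flagger): number {
  const base = 2;
  // +1 flag per order of magnitude of karma (10 -> 1, 100 -> 2, ...).
  const karmaBonus = Math.floor(Math.log10(Math.max(u.karma, 1)));
  return Math.max(0, base + karmaBonus - u.timesRuledAgainst);
}

function canFlagModerator(u: Flagger): boolean {
  return u.flagsUsedThisMonth < monthlyFlagBudget(u);
}
```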
Overall I don’t want to overpromise something like “if LW has a stronger concentration of force expectation for good conversation norms I’d participate 100x more instead of just reading.” But 10x more to begin with, certainly, and maybe more than that over time.
This is similar to the idea for the Sunshine Regiment from the early days of LW 2.0, where the hope was that if we had a wide team of people who were sometimes called on to do mod-ish actions (like explaining what’s bad about a comment, or how it could have been worded, or linking to the relevant part of The Sequences, or so on), we could get much more of it. (It would be a counterspell to the bystander effect (when someone specific gets assigned a comment to respond to), a license to respond at all (because otherwise who are you to complain about this comment?), a counterfactual-matching incentive to do it (if you do the work you’re assigned, you also fractionally encourage everyone else in your role to do the work they’re assigned), and a scheme to lighten the load (as there might be more mods than things to moderate).)
It ended up running into the problem that there actually weren’t all that many people suited to and interested in doing moderator work, and so it fell to the small team of people who would do it (which wasn’t large enough to reliably feel on top of things, rather than needing to prioritize around scarcity).
I also don’t think there’s enough uniformity of opinion among moderators or high-karma-users or w/e that having a single judge evaluate whole situations will actually resolve them. (My guess is that if I got assigned to this case Duncan would have wanted to appeal, and if RobertM got assigned to this case Said would have wanted to appeal, as you can see from the comments they wrote in response. This is even though I think RobertM and I agree on the object-level points and only disagree on interpretations and overall judgments of relevance!) I feel more optimistic about something like “a poll” of a jury drawn from some limited pool, where some situations go 10-0, others 7-3, some 5-5; this of course 10xs the costs compared to a single judge. (And open-access polls have both the benefit and drawback of volunteer labor.)
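(For concreteness, a minimal sketch of the jury variant, assuming an invented pool/draw/tally shape rather than anything the mods have actually proposed implementing:)

```typescript
// Hypothetical mod-jury poll: draw a small jury uniformly from a limited
// pool, collect binary votes, and report the split (10-0, 7-3, 5-5, ...)
// rather than a single judge's ruling.
function drawJury(pool: string[], size: number): string[] {
  const a = [...pool];
  for (let i = a.length - 1; i > 0; i--) {
    // Fisher-Yates shuffle for an unbiased draw.
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a.slice(0, size);
}

function tallyVotes(votes: boolean[]): string {
  const inFavor = votes.filter(Boolean).length;
  return `${inFavor}-${votes.length - inFavor}`;
}
```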
All good points, and yeah, I did consider the issue of “appeals”, but considered “accept the judgment you get” part of the implicit (or even explicit, if necessary) agreement made when raising that flag in the first place. Maybe it would require both people to mutually accept it.
But I’m glad the “pool of people” variation was tried, even if it wasn’t sustainable as volunteer work.
I’m not sure that’s true? I was asked at the time to be Sunshine mod, I said yes, and then no one ever followed up to assign me any work. At some point later I was given an explanation, but I don’t remember it.
You mean it’s considered a reasonable thing to aspire to, and just hasn’t reached the top of the list of priorities? This would be hair-raisingly alarming if true.
I’m not sure I parse this. I’d say yes, it’s a reasonable thing to aspire to, and it hasn’t reached the top of the moderators’/admins’ priorities. You say “that would be alarming”, and infer… something?
I think you might be missing some background context on how much I think Duncan cares about this, and what I mean by not prioritizing it to the degree he does?
(I’m about to make some guesses about Duncan. I expect to re-enable his commenting within a day or so and he can correct me if I’m wrong)
I think Duncan thinks that “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” strawmans his position, that it still gets mostly upvoted, and that if he weren’t going out of his way to make this obvious, people wouldn’t notice. And when he does argue that this is happening, his comment doesn’t get upvoted much at all.
You might just say “well, Duncan is wrong about whether this is strawmanning”. I think it is [edit for clarity: somehow] strawmanning, but Zack’s post still has some useful frames and it’s reasonable for it to be fairly upvoted.
I think if I were to try to say “knock it off, here’s a warning” the way I think Duncan wants me to, this would a) just be more time-consuming than mods have the bandwidth for (we don’t do that sort of move in general, not just for this class of post), b) disincentivize literal-Zack and new marginal Zack-like people from posting, and c) I think the amount of strawmanning here is just not bad enough to be worth that. (see this comment)
It’s a bad thing to institute policies when good proxies are missing. It doesn’t matter if the intended objective is good; a policy that isn’t feasible to sanely execute makes things worse.
Whether statements about someone’s inner state are “unfounded”, or whether something is a “strawman”, is hopelessly muddled in practice; only open-ended discussion has a hope of resolving that, not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.
But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in consideration of factual questions? Should we litigate p(doom) next? This is just obviously out of scope, I don’t see a principled difference. People should be allowed to be wrong, that’s the only way to notice being right based on observation of arguments (as opposed to by thinking on your own).
(So I think it’s not just good proxies needed to execute a policy that are missing in this case, but the objective is also bad. It’s bad on both levels, hence “hair-raisingly alarming”.)
I’m actually still kind of confused about what you’re saying here (and in particular whether you think the current moderator policy of “don’t get involved most of the time” is correct)
You implied and then confirmed that you consider a policy for a certain objective an aspiration; I argued that the policies I can imagine targeting that objective would be impossible to execute, making things worse through collateral damage, and that, separately, the objective itself seems bad (moderating factual claims).
(In the above two comments, I’m not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn’t seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I’m not averse to re-injecting the context into their discussion, but I won’t necessarily find that interesting or have things to say about it.)
So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack and Said are gesturing at some of the moderators’ arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions (a matter of integrity) or to implicitly take into account some piece of alleged knowledge. This seems related to how I find the objective of the hypothetical policy against strawmanning a bad thing.
Okay, gotcha, I had not understood that. (Vaniver’s comment elsethread had also cleared this up for me; I just hadn’t gotten around to replying to it yet.)
One thing “not close to the top of our list of priorities” means is that I haven’t actually thought that much about the issue in general. On the question of “do LessWrong moderators think they should respond to strawmanning?” (or various other fallacies), my guess (having thought about it for like 5 minutes recently) is something like:
I don’t think it makes sense for moderators to have a “policy against strawmanning”, in the sense that we take some kind of moderator action against it. But, a thing I think we might want to do is “when we notice someone strawmanning, make a comment saying ‘hey, this seems like strawmanning to me?’” (which we aren’t treating as special mod comment with special authority, more like just proactively being a good conversation participant). And, if we had a lot more resources, we might try to do something like “proactively noticing and responding to various fallacious arguments at scale.”
(FYI @Vladimir_Nesov I’m curious if this sort of thing still feels ‘hair raisingly alarming’ to you)
(Note that I see this issue as fairly different from the issue with Said, where the problem is not any one given comment or behavior, but an aggregate pattern)
Why do you think it’s strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!
As I’ve explained, I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment, I gave two examples illustrating what I thought the relevant evidentiary standard looks like.
If I had a more Said-like commenting style, I would stop there, but as a faithful adherent of the church of arbitrarily large amounts of interpretive labor, I’m willing to do your work for you. When I imagine being a lawyer hired to argue that “‘Rationalist Discourse’ Is Like ‘Physicist Motors’” engages in strawmanning, and trying to point to which specific parts of the post constitute a misrepresentation, the two best candidates I come up with are (a) the part where the author claims that “if someone did [speak of ‘physicist motors’], you might quietly begin to doubt how much they really knew about physics”, and (b) the part where the author characterizes Bensinger’s “defeasible default” of “role-playing being on the same side as the people who disagree with you” as being what members of other intellectual communities would call “concern trolling.”
However, I argue that both examples (a) and (b) fail to meet the relevant standard, of the text that the critic published materially misrepresenting the text that the original author published.
In the case of (a), while the most obvious reading of the text might be characterized as rude or insulting insofar as it suggests that readers should quietly begin to doubt Bensinger’s knowledge of rationality, insulting an author is not the same thing as materially misrepresenting the text that the author published. In the case of (b), “concern-trolling” is a pejorative term; it’s certainly true that Bensinger would not self-identify as engaging in concern-trolling. But that’s not what the text is arguing: the claim is that the substantive behavior that Bensinger recommends is something that other groups would identify as “concern trolling.” I continue to maintain that this is true.
Regarding another user’s claim that the “entire post” in question “is an overt strawman”, that accusation was rebutted in the comments by both myself and Said Achmiz.
In conclusion, I stand by my post.
If you disagree with my analysis here, that’s fine: I want people to be able to criticize my work. But I think you should be able to say why, specifically. I think it’s great when people make negative-valence claims about my work, and then back up those claims with specific arguments that I can learn from. But I think it’s bad when people make negative-valence claims about my work that they don’t argue for, and then I have to do their work for them as part of my service to the church of arbitrarily large amounts of interpretive labor (as I’ve done in this comment).
I meant the primary point of my previous comment to be “Duncan’s accusation in that thread is below the threshold of ‘deserves moderator response’” (i.e. Duncan wishes the LessWrong moderators would intervene on things like that on his behalf [edit: reliably and promptly], and I don’t plan to do that, because I don’t think it’s that big a deal). (I edited the previous comment to say “kinda” strawmanning, to clarify the emphasis more.)
My point here was just explaining to Vladimir why I don’t find it alarming that the LW team doesn’t prioritize strawmanning the way Duncan wants (I’m still somewhat confused about what Vlad meant with his question though and am honestly not sure what this conversation thread is about)
I see Vlad as saying “that it’s even on your priority list, given that it seems impossible to actually enforce, is worrying” not “it is worrying that it is low instead of high on your priority list.”
I think it plausibly is a big deal and mechanisms that identify and point out when people are doing this (and really, I think a lot of the time it might just be misunderstanding) would be very valuable.
I don’t think moderators showing up and making a judgment and proclamation is the right answer. I’m more interested in making it so people reading the thread can provide the feedback, e.g. via Reacts.
Just noting that “What specifically did it get wrong?” is a perfectly reasonable question to ask, and is one I would have (in most cases) been willing to answer, patiently and at length.
That I was unwilling in that specific case is an artifact of the history of Zack being quick to aggressively misunderstand that specific essay, in ways that I considered excessively rude (and which Zack has also publicly retracted).
Given that public retraction, I’m considering going back and in fact answering the “what specifically” question, as I normally would have at the time. If I end up not doing so, it will be more because of opportunity costs than anything else. (I do have an answer; it’s just a question of whether it’s worth taking the time to write it out months later.)
I’m very confused: how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post?
The author can say that a reader’s post is an inaccurate representation of the author’s ideas, but how can the author possibly read the reader’s mind and conclude that the reader is doing it on purpose? Isn’t that a claim that requires exceptional evidence?
Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that judgments are more likely to be influenced by factors other than truth. It won’t matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, or more attractive get more leeway).
I personally would very much rather people be judged by their concrete actions or the impact of those actions (e.g. saying someone consistently rephrases arguments in ways that do not match the author’s intent or the majority of readers’ understanding), rather than by their intent (e.g. saying someone is strawmanning).
To be against both strawmanning (with weak evidence) and ‘making unfounded statements about a person’s inner state’ seems to me like a self-contradictory and inconsistent stance.
I think Said and Duncan are clearly channeling this conflict, but the conflict is not about them, and doesn’t originate with them. So by having them go away or stop channeling the conflict, you leave it unresolved and without its most accomplished voices, shattering the possibility of resolving it in the foreseeable future. This is the hush-hush strategy of dealing with troubling observations: fixing symptoms instead of researching the underlying issues, however onerous that is proving to be.
(This announcement is also rather hush-hush; it’s not a post, and so I’ve only just discovered it, 5 days later. This leaves it with less scrutiny than I think transparency of such an important step requires.)
It’s an update to me that you hadn’t seen it (I figured that since you had replied to a bunch of other comments you were tracking the thread, and more generally figured that since there are 360 comments on this thing it wasn’t suffering from lack of scrutiny). But it’s plausible that we should pin it for a day when we make our next set of announcement comments (which are probably coming sometime this weekend, fwiw).
I meant this thread specifically, with the action announcement, not the post. The thread was started 4 days after the post, so everyone who wasn’t tracking the post had every opportunity to miss it. (It shouldn’t matter for the point about scrutiny that I in particular might’ve been expected to not miss it.)
Just want to note that I’m less happy with a LessWrong without Duncan. I very much value Duncan’s pushback against what I see as a slow decline in quality, and so I would prefer him to stay and continue doing what he’s doing. The fact that he’s being complained about makes sense, but is mostly a function of him doing something valuable. I have had a few times where I have been slapped down by Duncan (albeit in comments on his Facebook page, where it’s much clearer that his norms are operative) and I’ve been annoyed, but each of those times, despite being frustrated, I have found that I’m being pushed in the right direction and corrected for something I’m doing wrong.
I agree that it’s bad that his comments are often overly confrontational, but there’s no way to deliver constructive feedback that doesn’t involve a degree of confrontation, and I don’t see many others pushing to raise the sanity waterline. In a world where a dozen people were fighting the good fight, I’d be happy to ask him to take a break. But this isn’t that world, and it seems much better to actively promote a norm of people saying they don’t have energy or time to engage than telling Duncan (and maybe / hopefully others) not to push back when they see thinking and comments which are bad.
I think I want to reiterate my position that I would be sad about Said not being able to discuss Circling (which I think is one of the topics in that fuzzy cluster). I would still like to have a written explanation of Circling (for LW) that is intelligible to Said, and him being able to point out which bits are unintelligible and not feel required to pretend that they are intelligible seems like a necessary component of that.
With regards to Said’s ‘general pattern’, I think there’s a dynamic around socially recognized gnosis where sometimes people will say “sorry, my inability/unwillingness to explain this to you is your problem” and have the commons on their side or not, and I would be surprised to see LW take the position that authors decide for that themselves. Alternatively, tech that somehow makes this more discoverable and obvious—like polls or reacts or w/e—does seem good.
I think productive conversations stem from there being some (but not too much) diversity in what gnosis people are willing to recognize, and in the ability for subspaces to have smaller conversations that require participants to recognize some gnosis.
Is there any evidence that either Duncan or Said is actually detrimental to the site in general, or is it mostly in their interactions directly with each other? As far as I can see, 99% of the drama here is in their conflicts directly with each other and heavy moderation-team involvement in it.
From my point of view (as an interested reader and commenter), this latest drama appears to have started partly due to site moderation essentially forcing them into direct conflict with each other via a proposal to adopt norms based on Duncan’s post while Said and others were and continue to be banned from commenting on it.
From this point of view, I don’t see what either Said or Duncan has done to justify any sort of ban, temporary or not.
This decision is based mostly on past patterns with both of them, over the course of ~6 years.
The recent conflict, in isolation, is something where I’d kinda look sternly at them and kinda judge them (and maybe a couple others) for getting themselves into a demon thread*, where each decision might look locally reasonable but nonetheless escalates into a weird proliferating discussion that is (at best) a huge attention sink and (at worst) gets people into an increasingly antagonistic fight that brings out people’s worst instincts. If I spent a long time analyzing it I might come to more clarity about who was more at fault, but I think the most I might do for this one instance is ban one or both of them for like a week or so and tell them to knock it off.
The motivation here is from a larger history. (I’ve summarized one chunk of that history from Said here, and expect to go into both a bit more detail about Said and a bit more about Duncan in some other comments soon, although I think I describe the broad strokes in the top-level-comment here)
And notably, my preference is for this not to result in a ban. I’m hoping we can work something out. The thing I’m laying down in this comment is “we do have to actually work something out.”
I condemn the restrictions on Said Achmiz’s speech in the strongest possible terms. I will likely have more to say soon, but I think the outcome will be better if I take some time to choose my words carefully.
His speech is not being restricted in variety; it’s being rate-limited. The difference there is enormous.
Did we read the same verdict? The verdict says that the end of the ban is conditional on the users in question “credibly commit[ting] to changing their behavior in a fairly significant way”, “accept[ing] some kind of tech solution that limits their engagement in some reliable way that doesn’t depend on their continued behavior”, or “be[ing] banned from commenting on other people’s posts”.
The first is a restriction on variety of speech. (I don’t see what other kind of behavioral change the mods would insist on—or even could insist on, given the textual nature of an online forum where everything we do here is speech.) The third is a restriction of venue, which I claim predictably results in a restriction of variety. (Being forced to relegate your points into a shortform or your own post, won’t result in the same kind of conversation as being able to participate in ordinary comment threads.) I suppose the “tech solution” of the second could be mere rate-limiting, but the “doesn’t depend on their continued behavior” clause makes me think something more onerous is intended.
(The grandparent only mentions Achmiz because I particularly value his contributions, and because I think many people would prefer that I don’t comment on the other case, but I’m deeply suspicious of censorship in general, for reasons that I will likely explain in a future post.)
The tech solution I’m currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I’m leaning towards either “3 comments per post” or “3 comments per post per day”. (My ideal world, for Said, is something like “3 comments per post to start, but, if nothing controversial happens and he’s not ruining the vibe, he gets to comment more without limit.” But that’s fairly difficult to operationalize and a lot of dev time for a custom feature limiting one or two particular users.)
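(To illustrate the difference between those two options, a rough sketch; the types and function names are invented for illustration, not actual LessWrong code:)

```typescript
// Sketch of the two rate-limit variants: "3 comments per post" (total)
// versus "3 comments per post per day" (rolling 24-hour window).
interface CommentRecord {
  postId: string;
  postedAt: number; // Unix ms timestamp
}

const LIMIT = 3;
const DAY_MS = 24 * 60 * 60 * 1000;

// Variant A: at most 3 comments on a given post, ever.
function canCommentPerPost(history: CommentRecord[], postId: string): boolean {
  return history.filter(c => c.postId === postId).length < LIMIT;
}

// Variant B: at most 3 comments on a given post in any 24-hour window.
function canCommentPerPostPerDay(
  history: CommentRecord[],
  postId: string,
  now: number
): boolean {
  const recent = history.filter(
    c => c.postId === postId && now - c.postedAt < DAY_MS
  );
  return recent.length < LIMIT;
}
```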
I do have a high-level goal of “users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so”. The question here is “do you want the ‘real work’ of developing new rationality techniques to happen on LessWrong, or someplace else where Said etc. can’t bother you?” (the latter of which is what’s mostly currently happening).
So, yeah, the concrete outcome here is Said not getting to comment everywhere he wants, but he’s already not getting to do that, because the relevant content + associated usage-building happens off LessWrong, and then he finds himself in a world where everyone is “suddenly” in significant agreement about some “frame control” concept he’s never heard of. (I can’t find the exact comment atm, but I remember him expressing alarm at the degree of consensus on frame control, in the comments of Aella’s post. There was consensus because somewhere between 50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years. I’m not sure there’s a world where that discussion was happening on LW, because frame control tends to come up in dicey, sensitive, adversarial situations.)
So, I think the censorship policy you’re imagining is a fabricated option.
My current guess at actual next steps is: Said gets a “3 comments per post per day” restriction, and is banned from commenting on shortform in particular (since our use case for that is specifically antithetical to the vibe Said wants); then (after also setting up some other moderation tools and making some judgment calls on some other similar-but-lower-profile users), we message people like Logan Strohl, saying “hey, we’ve made a bunch of changes, we’d like it if you came in and tried using the site again”, and hope that this time it actually works.
(Duncan might get a similar treatment, for fairly different reasons, although I’m more optimistic about him and us actually negotiating something that requires less heavy-handed restriction.)
We already have a user-level personal ban feature! (Said doesn’t like it, but he can’t do anything about it.) Why isn’t the solution here just, “Users who don’t want to receive comments from Said ban him from their own posts”? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.
This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I’m unlikely to guess it; you’ll have to clarify.) It’s true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by individual users—currently, that’s Elizabeth, and DirectedEvolution, and one other user).
This would make Less Wrong worse for me. I want Said Achmiz to have unlimited, unconditional commenting privileges on my posts. (Unconditional means the software doesn’t stop Said from posting a fourth comment; “to start” is not unconditional if it requires a human to approve the fourth comment.)
More generally, as a long-time user of Less Wrong (original join date 26 February 2009, author of five Curated posts) and preceding community (first Overcoming Bias comment 22 December 2007, attendee of the first Overcoming Bias meetup on 21 February 2008), I do not want Said Achmiz to be a second-class citizen in my garden. If we have a user-level personal ban feature that anyone can use, I might or might not think that’s a good feature to have, but at least it’s a feature that everyone can use; it doesn’t arbitrarily single out a single user on a site-wide basis.
Judging by the popularity of Alicorn’s comment testifying that she “[doesn’t] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything” (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I’d bet a lot of other users feel similarly. From your stated plans, it looks like you’re not taking those 43 users’ preferences into account. Why is that? This seems like a question you should be able to answer.
Stipulating that votes on this comment are more than negligibly informative on this question… it seems bizarre to count karma rather than agreement votes (currently 51 agreement from 37 votes). But also anyone who downvoted (or disagreed) here is someone who you’re counting as not being taken into account, which seems exactly backwards.
Some other random notes (probably not maximally cruxy for you, but):
1. If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curious/interpretive mode, without needing to wait for an author or mod to ban him from that post), then I’d be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.
But we’ve had tons of conversations with Said asking him to adjust his behavior, and he seems pretty committed to sticking to his current behavior. At best he seems grudgingly willing to avoid some threads if there are clear-cut rules we can spell out, but I don’t trust him to actually tell the difference in many edge cases.
We’ve spent a hundred+ person-hours over the years thinking about how to limit Said’s damage, and we have a lot of other priorities on our plate. I consider it a priority to resolve this in a way that won’t continue to eat up more of our time.
2. I did list “actually just encourage people to use the ban tool more” as an option. (DirectedEvolution didn’t even know it was an option until it was pointed out to him recently.) If you actually want to advocate for that over a Said-specific rate-limit, I’m open to that (my model of you thinks that’s worse).
(Note: I, and I think several other people on the mod team, would have banned him from our own comment sections if we didn’t feel an obligation as mods/site-admins to have more open comment sections.)
3. I will probably build something that lets people Opt Into More Said. I think it’s fairly likely the mod team will do some heavier-handed moderation in the nearish future, and I think a reasonable countermeasure, to alleviate some downsides of this, is to also give authors a “let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way” option.
(I don’t expect that to really resolve your crux here but it seemed like it’s at least an improvement on the margin)
4. I think it’s plausible that the right solution is to ban him from shortform, and use shortform as the place where people can talk about whatever they want in a more open/curious vibe. I currently don’t think this is the right call, because I think it’s just actually a super reasonable, centrally supported use case of top-level posts to have sets of norms that are actively curious and invested. It seems really wrong to me to think the only kind of conversation you need to make intellectual progress is “criticize without trying to figure out what the OP is about and what problems they’re trying to solve”.
I do think, for the case of Said, building out two high-level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”, letting authors choose between them, and specifically banning Said from the former, is a viable option I’d consider. I think you have previously argued against this, and Said expressed dissatisfaction with it elsewhere in this comment section.
(This solution probably wouldn’t address my concerns about Duncan though)
I am a little worried that this is a generalization that doesn’t line up with actual evidence on the ground, and instead is caused by some sort of vibe spiral. (I’m reluctant to suggest a lengthy evidence review, both because of the costs and because I’m somewhat uncertain of the benefits—if the problem is that lots of authors find Said annoying or his reactions unpredictable, and we review the record and say “actually Said isn’t annoying”, those authors are unlikely to find it convincing.)
In particular, I keep thinking about this comment (noting that I might be updating too much on one example). I think we have evidence that “Said can engage with open/curious/interpretative topics/posts in a productive way”, and should maybe try to figure out what was different that time.
I think that in the sense of the general garden-style conflict (rather than the Said/Duncan conflict specifically), this is the only satisfactory solution that’s currently apparent: users picking the norms they get to operate under, like Commenting Guidelines, but more meaningful in practice.
There should be, for a start, just two options, Athenian Garden and Socratic Garden, so that commenters can cheaply make decisions about what kinds of comments are appropriate for a particular post, without having to read custom guidelines.
Excellent. I predict that Said wouldn’t be averse to voluntarily not commenting on “open/curious/cooperative” posts, or not commenting there in the kind of style that adherents of that culture dislike, so that “specifically banning Said” from that is an unnecessary caveat.
Well, I’m glad you’re telling actual-me this rather than using your model of me. I count the fact your model of me is so egregiously poor (despite our having a number of interactions over the years) as a case study in favor of Said’s interaction style (of just asking people things, instead of falsely imagining that you can model them).
Yes, I would, actually, want to advocate for informing users about a feature that already exists that anyone can use, rather than writing new code specifically for the purpose of persecuting a particular user that you don’t like.
Analogously, if the town council of the city I live in passes a new tax increase, I might grumble about it, but I don’t regard it as a direct personal threat. If the town council passes a tax increase that applies specifically to my friend Said Achmiz, and no one else, that’s a threat to me and mine. A government that does that is not legitimate.
So, usually when people make this kind of “hostile paraphrase” in an argument, I tend to take it in stride. I mostly regard it as “part of the game”: I think most readers can tell the difference between an attempted fair paraphrase (which an author is expected to agree with) and an intentional hostile paraphrase (which is optimized to highlight a particular criticism, without the expectation that the author will agree with the paraphrase). I don’t tell people to be more charitable to me; I don’t ask them to pass my ideological Turing test; I just say, “That’s not what I meant,” and explain the idea again; I’m happy to do the extra work.
In this particular situation, I’m inclined to try out a different commenting style that involves me doing less interpretive labor. I think you know very well that “criticize without trying to figure out what the OP is about” is not what Said and I think is at issue. Do you think you can rephrase that sentence in a way that would pass Said’s ideological Turing test?
Right, so if someone complains about Said, point out that they’re free to strong-downvote him and that they’re free to ban him from their posts. That’s much less time-consuming than writing new code! (You’re welcome.)
Sorry, I thought your job was to run a website, not dictate to people how they should think and write? (Where part of running a website includes removing content that you don’t want on the website, but that’s not the same thing as decreeing that individuals must “integrat[e] the spirit-of-[your]-models into [their] commenting style”.) Was I mistaken about what your job is?
I am strongly opposed to this because I don’t think the proposed distinction cuts reality at the joints. (I’d be happy to elaborate on request, but will omit the detailed explanation now in order to keep this comment focused.)
We already let authors write their own moderation guidelines! It’s a blank text box! If someone happens to believe in this “cooperative vs. adversarial” false dichotomy, they can write about it in the text box! How is that not enough?
Because it’s a blank text box, it’s not convenient for commenters to read it in detail every time, so I expect almost nobody reads it, and these guidelines are not practical to follow.
With two standard options, color-coded or something, it becomes actually practical, so the distinction between a blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.
Also, moderation guidelines aren’t visible on GreaterWrong at all, afaict. So Said specifically is unlikely to adjust his commenting in response to those guidelines, unless that changes.
(I assume Said mostly uses GW, since he designed it.)
I’ve been busy, so hadn’t replied to this yet, but specifically wanted to apologize for the hostile paraphrase (I notice I’ve done that at least twice now in this thread; I’m trying to do better, but it seems important for me to notice and pay attention to).
I think I phrased the “corrigible about actually integrating the spirit-of-our-models into his commenting style” line pretty badly; Oliver and Vaniver also both thought it was pretty alarming. The thing I was trying to say I eventually reworded in my subsequent mod announcement as:
i.e. this isn’t about Said changing his own thought process, but, like, there is a spirit-of-the-law relevant to the mod decision here, and whether I need to worry about specification-gaming.
I expect you to still object to that for various reasons, and I think it’s reasonable to be pretty suspicious of me for phrasing it the way I did the first time. (I think it does convey something sus about my thought process, but, fwiw I agree it is sus and am reflecting on it)
FYI, my response to this is waiting for an answer to my question in the first paragraph of this comment.
I’m still uncertain how I feel about a lot of the details on this (and am enough of a lurker rather than poster that I suspect it’s not worth my time to figure that out / write it publicly), but I just wanted to say that I think this is an extremely good thing to include:
This strikes me basically as a way to move the mod team’s role more into “setting good defaults” and less “setting the only way things work”. How much y’all should move in that direction seems an open question, as it does limit how much cultivation you can do, but it seems like a very useful tool to make use of in some cases.
How technically troublesome would an allow list be?
Maybe the default is everyone gets three comments on a post. People the author has banned get zero, people the author has opted in for get unlimited, the author automatically gets unlimited comments on their own post, mods automatically get unlimited comments.
(Or if this feels more like a Said and/or Duncan specific issue, make the options “Unlimited”, “Limited”, and “None/Banned” then default to everyone at Unlimited except for Said and/or Duncan at Limited.)
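(A sketch of how that resolution order might look; the tier behavior and the default of three come from the description above, while the names and types are invented:)

```typescript
// Per-post comment-limit resolution: author and mods unlimited, banned
// users zero, opted-in users unlimited, everyone else the default.
interface PostSettings {
  authorId: string;
  bannedUserIds: Set<string>;
  optedInUserIds: Set<string>;
}

const DEFAULT_ALLOWANCE = 3;

function commentAllowance(
  userId: string,
  post: PostSettings,
  isModerator: boolean
): number {
  if (isModerator || userId === post.authorId) return Infinity;
  if (post.bannedUserIds.has(userId)) return 0;
  if (post.optedInUserIds.has(userId)) return Infinity;
  return DEFAULT_ALLOWANCE;
}
```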
My prediction is that those users are primarily upvoting it for what it’s saying about Duncan rather than about Said.
To spell out what evidence I’m looking at:
There is definitely some term in my / the mod team’s equation for “this user is providing a lot of valuable stuff that people want on the site”. But the high-level call the moderation team is making is something like “maximize useful truths we’re figuring out”. Hearing about how many people are getting concrete value out of Said’s or Duncan’s comments is part of that equation; hearing about how many people are feeling scared or off-put enough that they don’t comment/post much is also part of that equation. And there are also subtler interplays that depend on our actual model of how progress gets made.
I wonder how much of the difference in intuitions about Duncan and Said come from whether people interact with LW primarily as commenters or as authors.
The concerns about Said seem to be entirely from and centered around the concerns of authors: he makes posting more costly, he drives content away. Meanwhile, many concerns about Duncan could be phrased as being about how he interacts with commenters.
If this trend exists, it is complicated. Said gets >0 praise from authors for his comments on their own posts (e.g. Raemon here), and major Said defender Zack has written lots of well-regarded posts; Said-banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts, and Duncan also generates a fair amount of concern for attempts to set norms outside his own posts. But I think there might be a thread here.
Thank you for the compliment!
With writing science commentary, my participation is contingent on there being a specific job to do (often, “dig up quotes from links and citations and provide context”) and a lively conversation. The units of work are bite-size. It’s easy to be useful and appreciated.
Writing posts is already relatively speaking not my strong suit. There’s no preselection on people being interested enough to drive a discussion, what makes a post “interesting” is unclear, and the amount of work required to make it good is large enough that it feels like work more than play. When I do get a post out, it often fails to attract much attention. What attention it does receive is often negative, and Said is one of the more prolific providers of negative attention. Hence, I ban Said because he further inhibits me from developing in my areas of relative weakness.
My past conflict with Duncan arose when I would impute motives to him, or blur the precise distinctions in language he was attempting to draw—essentially failing to adopt the “referee” role that works so well in science posts, and putting the same negative energy I dislike receiving into my responses to Duncan’s posts. When I realized this was going on, I apologized and changed my approach, and now I no longer feel a sense of “danger” in responding to Duncan’s posts or comments. I feel that my commenting strong suit is quite compatible with friendly discourse with Duncan, and Duncan is good at generating lively discussions where my refereeing skillset may be of use.
So if I had to explain it, some people (me, Duncan) are sensitive about posting, while others are sharp in their comments (Said, anonymousaisafety). Those who are sensitive about posting will get frustrated by Said, while those who write sharp comments will often get in conflict with Duncan.
I’m not sure what other user you’re referring to besides Achmiz—it looks like there’s supposed to be another word between “about” and “and” in your first sentence, and between “about” and “could” in the last sentence of your second paragraph, but it’s not rendering correctly in my browser? Weird.
Anyway, I think the pattern you describe could be generated by a philosophical difference about where the burden of interpretive labor rests. A commenter who thinks that authors have a duty to be clear (and therefore asks clarifying questions, or makes attempted criticisms that miss the author’s intended point) might annoy authors who think that commenters have a duty to read charitably. Then the commenter might be blamed for driving authors away, and the author might be blamed for getting too angrily defensive with commenters.
I interact with this website as an author more than a commenter these days, but in terms of the dichotomy I describe above, I am very firmly of the belief that authors have a duty to be clear. (To the extent that I expect that someone who disagrees with me, also disagrees with my proposed dichotomy; I’m not claiming to be passing anyone’s ideological Turing test.)
The other month I published a post that I was feeling pretty good about, quietly hoping that it might break a hundred karma. In fact, the comment section was very critical (in ways that I didn’t have satisfactory replies to), and the post only got 18 karma in 26 votes, an unusually poor showing for me. That made me feel a little bit sad that day, and less likely to write future posts that I could anticipate being disliked by commenters in the way that this post was disliked.
In my worldview, this is exactly how things are supposed to work. I didn’t have satisfactory replies to the critical comments. Of course that’s going to result in downvotes! Of course it made me a little bit sad that day! (By “conservation of expected feelings”: I would have felt a little bit happy if the post did well.) Of course I’m going to try not to write posts relevantly “like that” in the future!
I’ve been getting the sense that a lot of people somehow seem to disagree with me that this is exactly how things are supposed to work?—but I still don’t think I understand why. Or rather, I do have an intuitive model of why people seem to disagree, but I can’t quite permit myself to believe it, because it’s too uncharitable; I must not be understanding correctly.
Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.
I think “duty to be clear” skips over the hard part, which is that “being clear” is transitive: it doesn’t make sense to say a post is clear or not clear, only whom it is clear or unclear to.
To use a trivial example: well-taught Physics 201 is clear if you’ve had the prerequisite physics classes or are a physics savant, but not if you’re a layman. Poorly taught Physics 201 is clear to a subset of the people who would understand it if well taught. And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 → Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post with fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.
So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “100%” but being more specific than that can be hard and is prone to disagreement.
Commenters of course have every right to say “I don’t understand this” and politely ask questions. But I, and I suspect the mods and most authors, reject the idea that publishing a piece on LessWrong gives me a duty to make every reader understand it. That may cost me karma or respect and I think that’s fine*, I’m not claiming a positive right to other people’s high regard.
You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct, but not at the limit: there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.
*although I think downvoting things I don’t understand is tricky specifically because it’s hard to tell where the problem lies, so I rarely do.
YES. I think this is hugely important, and I think it’s a pretty good definition of the difference between a confused person and a crank.
Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they’re lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or higher status (at least on the issue at hand) as the person they’re addressing. They already expect the author they’re questioning is fundamentally confused, and so they don’t waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank’s attention, since they’re obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.
There’s absolutely a middle ground. There are many times when I ask questions—let’s say of an academic author—where I think the author is probably either wrong or misguided in their analysis. But outside of pointing out specific facts that I know are wrong and suspect the author might not have noticed, I never address these authors in the manner of a crank. If I bother to contact them, it’s to ask questions to do things like:
Describe my specific disagreement succinctly, and ask the author to explain why they think or approach the issue differently
Ask about the points in the author’s argument I don’t fully understand, in case those turn out to be cruxes
Ask what they think about my counterargument, on the assumption that they’ve already thought about it and have a pretty good answer that I’m genuinely interested in hearing
This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers.
And this attitude is particularly corrosive to feelings of trust, collaboration, “jamming together,” etc. … it’s like walking into a martial arts academy and finding a person present who scoffs at instructors and students alike, and who doesn’t offer sufficient faith to even try a given exercise once before first a) hearing it comprehensively justified and b) checking the sparring records to see if people who did that exercise win more fights.
Which, yeah, that’s one way to zero in on the best martial arts practices, if the other people around you also signed up for that kind of culture and have patience for that level of suspicion and mistrust!
(I choose martial arts specifically because it’s a domain full of anti-epistemic garbage and claims that don’t pan out.)
But in practice, few people will participate in such a martial arts academy for long, and it’s not true that a martial arts academy lacking that level of rigor makes no progress in discovering and teaching useful things to its students.
You’re describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.
The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you’re right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can’t, and no one can, then he might have a point, and the gym gets to learn something new.
If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they’re an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren’t there in the first place. It’s definitely more challenging to jam with dissonant characters like that (especially if they’re dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it’s important to realize that the problem isn’t so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
Strong disagree that I’m describing a deeply dysfunctional gym; I barely described the gym at all and it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
There’s a difference between “hey, I want to understand the underpinnings of this” and the thing I described, which is hostile to the point of “why are you even here, then?”
Edit: I view the votes on this and the parent comment as indicative of a genuine problem; jimmy above is exhibiting actually bad reasoning (à la representativeness) and the LWers who happen to be hanging around this particular comment thread are, uh, apparently unaware of this fact. Alas.
Well, you mentioned the scenario as an illustration of a “particularly corrosive” attitude. It therefore seems reasonable to fill in the unspecified details (like just how disruptive the guy’s behavior is, how much of everyone’s time he wastes, how many instructors are driven away in shame or irritation) with pretty negative ones—to assume the gym has in fact been corroded, being at least, say, moderately dysfunctional as a result.
Maybe “deeply dysfunctional” was going too far, but I don’t think it’s reasonable to call that “way overconfident/projection-y”. Nor does the difference between “deeply dysfunctional” and “moderately dysfunctional” matter for jimmy’s point.
FYI, I’m inclined to upvote jimmy’s comment because of the second paragraph: it seems to be the perfect solution to the described situation (and to all hypothetical dysfunction in the gym, minor or major), and has some generalizability (look for cheap tests of beliefs, challenge people to do them). And your comment seems to be calling jimmy out inappropriately (as I’ve argued above), so I’m inclined to at least disagree-vote it.
“Let’s imagine that these unspecified details, which could be anywhere within a VERY wide range, are specifically such that the original point is ridiculous, in support of concluding that the original point is ridiculous” does not seem like a reasonable move to me.
Separately:
https://www.lesswrong.com/posts/WsvpkCekuxYSkwsuG/overconfidence-is-deceit
I think my feeling here is:
Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.
But it’s not clear how important that mistake is to his comment. I expect people were mostly reacting to paragraphs 2 and 3, and you could cut paragraph 1 out and they’d stand by themselves.
Do the more-interesting parts of the comment implicitly rely on the projection/unjustified-claim? Also not clear to me. I do think the comment is overstated. (“The way to jam”?) But e.g. “the problem isn’t so much the difficulty as the inability to overcome the difficulty” seems… well, I’d say this is overstated too, but I do think it’s pointing at something that seems valuable to keep in mind even if we accept that the gym is functional.
So I don’t think it’s unreasonable that the parent got significantly upvoted, though I didn’t upvote it myself; and I don’t think it’s unreasonable that your correction didn’t, since it looks correct to me but like it’s not responding to the main point.
Maybe you think paragraphs 2 and 3 were relying more on the projection than it currently seems to me? In that case you actually are responding to what-I-see-as the main point. But if so I’d need it spelled out in more detail.
FWIW, that is a claim I’m fully willing and able to justify. It’s hard to disclaim all the possible misinterpretations in a brief comment (e.g. “deeply” != “very”), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.
Yes, and that’s why I described the attitude as “dysfunctionally dissonant” (emphasis in original). It’s not a good way of challenging the instructors, and not the way I recommend behaving.
What I’m talking about is how a healthy gym environment is robust to this sort of dysfunctional dissonance, and how to productively relate to unskilled dissonance by practicing skillfully enough yourself that the system’s combined dysfunction never becomes supercritical and instead decays towards productive cooperation.
That’s certainly one possibility. But isn’t it also conceivable that I simply see underlying dynamics (and lack thereof) which you don’t see, and which justify the confidence level I display?
It certainly makes sense to track the hypothesis that I am overconfident here, but ironically it strikes me as overconfident to be asserting that I am being overconfident without first checking things like “Can I pass his ITT”/”Can I point to a flaw in his argument that makes him stutter if not change his mind”/etc.
To be clear, my view here is based on years of thinking about this kind of problem and practicing my proposed solutions with success, including in a literal martial arts gym for the last eight years. Perhaps I should have written more about these things on LW so my confidence doesn’t appear to come out of nowhere, but I do believe I am able to justify what I’m saying very well and won’t hesitate to do so if anyone wants further explanation or sees something which doesn’t seem to fit. And hey, if it turns out I’m wrong about how well supported my perspective is, I promise not to be a poor sport about it.
In the absence of an object-level counterargument, this is textbook ad hominem. I won’t argue that there isn’t a place for that (or that it’s impossible that my reasoning is flawed), but I think it’s hard to argue that it isn’t premature here. As a general rule, anyone who disagrees with anyone can come up with a million accusations of this sort, and it isn’t uncommon for some of them to be right to an extent, but it’s really hard to have a productive conversation if such accusations are used as a first resort rather than as a last resort, especially when they aren’t well substantiated.
I see that you’ve deactivated your account now so it might be too late, but I want to point out explicitly that I actively want you to stick around and feel comfortable contributing here. I’m pushing back against some of the things you’re saying because I think that it’s important to do so, but I do not harbor any ill will towards you nor do I think what you said was “ridiculous”. I hope you come back.
I thought it was a reference to, among other things, this exchange where Said says one of Duncan’s Medium posts was good, and Duncan responds that his decision to not post it on LW was because of Said. If you’re observing that Said could just comment on Medium instead, or post it as a linkpost on LW and comment there, I think you’re correct. [There are, of course, other things that are not posted publicly, where I think it then becomes true.]
I do want to acknowledge that, based on various comments and vote patterns, I agree it seems like a pretty controversial call. I model it as something like spending down (and/or making a bet with) a limited resource, or maybe two specific resources: “trust in the mods” and “some groups of people’s willingness to put up with the site being optimized in a way they think is wrong.”
Despite that, I think it is the right call to limit Said significantly in some way. But I don’t think we can make many moderation calls this controversial, on users this established, without causing some pretty bad things to happen.
Indeed. I would encourage you to ask yourself whether the number referred to by “that many” is greater than zero.
I don’t remember this. I feel like Aella’s post introduced the term?
A better example might be Circling, though I think Said might have had a point that it hadn’t been carefully scrutinized; a lot of people had just been doing it.
Frame control was a pretty central topic in the “what’s going on with Brent?” conversations two years prior, as well as in some other circumstances. We’d been talking about it internally at Lightcone/LessWrong during that time.
Hmm, yeah, I can see that. Perhaps just not under that name.
I think the term was getting used, but it makes sense if you weren’t as involved in those conversations. (I just checked, and there’s only one old internal lw-slack message about it from 2019, but it didn’t feel like a new term to me at the time, and I’m pretty sure it came up a bunch on FB and in moderation convos periodically under that name.)
Ray writes:
For the record, I think the value here is “Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world”, and I don’t think that comes across in this bullet.
Yeah I agree with this, and agree it’s worth emphasizing more. I’m updating the most recent announcement to indicate this more, since not everyone’s going to read everything in this thread.
Great!
I feel like this incentivizes comments to be short, which doesn’t make them less aggravating to people. For example, IIRC people have complained about him commenting “Examples?”. This is not going to be hit hard by a rate limit.
‘Examples?’ is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
Spending my last remaining comment here.
I join Ray and Gwern in noting that asking for examples is generically good (and that I’ve never felt or argued to the contrary). Since my stance on this was called into question, I elaborated:
My recent experience has been that saying “this is half-baked” is not met with a subsequent shift in commentary toward the “Oh, I don’t have any yet, this is speculative, so YMMV” tone.
I think it would be nice if LW could have both tones:
I’m claiming this quite confidently; bring on the challenges, I’m ready to convince
I have a gesture in a direction I’m pretty sure has merit, but am not trying to e.g. claim that if others don’t update to my position they’re wrong; this is a sapling and I’d like help growing it, not help stepping on it.
Trying to do things in the latter tone on LW has felt, to me, extremely anti-rewarding of late, and I’m hoping that will change, because I think a lot of good work happens there. That’s not to say that the former tone is bad; it feels like they are twin pillars of intellectual progress.
Noting that my very first lesswrong post, back in the LW1 days, was an example of #2. I was wrong on some of the key parts of the intuition I was trying to convey, and ChristianKl corrected me. As an introduction to posting on LW, that was pretty good—I’d hate to think that’s no longer acceptable.
At the same time, there is less room for that now that the community has gotten much bigger, and I’d probably weak-downvote a similar post today rather than engage with a similar mistake, given how much content there is. Not sure if there is anything that can be done about this, but it’s an issue.
fwiw that seems like a pretty great interaction. ChristianKl seems to be usefully engaging with your frame while noting things about it that don’t seem to work, seems (to me) to have optimized somewhat for being helpful, and also the conversation just wraps up pretty efficiently. (And I think this is all a higher bar than what I mean to be pushing for; i.e., having only one of those properties would have been fine.)
I agree—but I think that now, if and when similarly early-stage thoughts on a conceptual model are proposed, there is less ability or willingness to engage, especially with people who are fundamentally confused about some aspect of the issue. This is largely, I believe, due to the volume of new participants and the reduced engagement with those types of posts.
I want to reiterate that I actually think the part where Said says “examples?” is basically just good (and is only bad insofar as it creates a looming worry of particular kinds of frustrating, unproductive and time-consuming conversations that are likely to follow in some subsets of discussions)
(edit: I actually am pretty frustrated that “examples?” became the go-to example people talked about and reified as a kinda rude thing Said did. I think I basically agree this process is good:
Alice → writes confident posts without examples
Bob → says “examples?”
Alice → either gives (at least one, and yeah ideally 3) examples, or says “Oh, I don’t have any yet, this is speculative, so YMMV”, or doesn’t reply but feels a bit chagrined.
)
Oops, sorry for saying something that probabilistically implied a strawman of you.
I’m not sure what you think this is strong evidence of?
I don’t think it’s “strong” evidence per se, but it was evidence that something I’d previously thought was more of a specific pet peeve of Duncan’s was in fact objected to by a broader set of LessWrongfolk.
(Where the thing in question is something like “making sweeping ungrounded claims about other people… but in a sort of colloquial/hyperbolic way which most social norms don’t especially punish”.)
Some evidence for that, though it also seems likely to get upvoted on the basis of “well written and evocative of a difficult personal experience”, or because people relate to being outliers and unusual even if they didn’t feel alienated and hurt in quite the same way. I’m unsure.
I upvoted it because it made me finally understand what in the world might be going on in Duncan’s head to make him react the way he does.
If the lifeguard isn’t on duty, then it’s useful to have the ability to be your own lifeguard.
I wanted to say that I appreciate the moderation style options and authors being able to delete comments and ban users on their own posts. While we’re talking about what to change and what isn’t working, I’d like to weigh in on the side of that being a good set of features that should be kept. Raemon, you’ve mentioned those features are there to be used. I’ve never used the capability and I’m still glad it exists. (I can barely use it, actually.) Since site-wide moderators aren’t going to intervene everywhere quickly (which I don’t think they should or even could; moderators are heavily outnumbered), I think letting people moderate their local piece is good.
If I ran into lots of negative feedback I didn’t think was helpful, and it wasn’t getting moderated by me or the site admins, I’d just move my writing to a blog on a different website where I could control things. Possibly I’d set up crossposting like Zvi or Jefftk and then ignore the LessWrong comment section. If lots of people do that, we get the diaspora effect from late LessWrong 1.0. Having people at least crossposting to LessWrong seems good to me, since I like tools like the agreement karma and the tag upvotes. Basically, the BATNA for a writer who doesn’t like LessWrong’s comment section is Wordpress or Substack. There are some writers you’d obviously rather see go elsewhere, but Said’s and Duncan’s top-level posts seem mostly a good fit here.
I do have a question about norm setting I’m curious about. If Duncan had titled his post “Duncan’s Basics of Rationalist Discourse” would that have changed whether it merited the exception around pushing site wide norms? What if lots of people started picking Norm Enforcing for the moderation guidelines and linking to it?
Yeah, I think this’d be much less cause for concern. (I haven’t checked whether the rest of the post has anything else that felt LW-wide-police-y about it; I’d maybe have wanted a slightly different opening paragraph or something.)
I think Duncan also posts all his articles on his own website, is this correct?
In that case, would it be okay to replace the article on LW with a link to Duncan’s website? So that the articles stay there, the comments stay here, the page with comments links to the article, but the article does not link back to the page with comments.
I am not suggesting doing this. I am asking whether, if Duncan (or anyone else) hypothetically at some moment decided for whatever reason that he was uncomfortable with his articles being on LW, doing this (moving the articles elsewhere and replacing them with links to the new place) would be acceptable to you. Like, whether this could be a policy: “if you decide to move away from LW, this is our preferred way to do it”.
Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking.
Both types of content are needed. Writing posts pattern-matches with Babbling/Building, whereas writing comments matches more closely with Pruning/Breaking. In my mind, anyway. (update: prediction market)
Inspired by this post, I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way, if you have a disagreement or are misunderstanding a post, there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that is an indication of a deeper model disagreement, and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turnover.)
Obviously the exact ratio doesn’t have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.
I’m not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post.
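To make the mechanics concrete, here is a minimal sketch of the check I have in mind, written in TypeScript. Everything in it (the Comment shape, mayComment, the cap of 3) is a made-up illustration, not anything from the actual LessWrong codebase:

```typescript
// Hypothetical sketch of a per-post comment cap; the data model is invented
// for illustration and is not the real LessWrong schema.
interface Comment {
  postId: string;
  authorId: string;
}

const MAX_COMMENTS_PER_POST = 3; // the ratio proposed above; mods could tune it

// True if the author is still under the cap on this specific post
// (the limit is per post, not site-wide).
function mayComment(
  existingComments: Comment[],
  authorId: string,
  postId: string,
): boolean {
  const used = existingComments.filter(
    (c) => c.authorId === authorId && c.postId === postId,
  ).length;
  return used < MAX_COMMENTS_PER_POST;
}
```

A real version would presumably exempt the post’s author and handle edge cases like deleted comments, but the core check is just a count against a threshold.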
I find the fact that you see comments as criticism, and not as expanding and continuing the building, indicative of what I see as problematic. Good comments should, most of the time, not be criticism; they should be part of the building.
The dynamic that is good in my eyes is one where comments make the post better not by criticizing it, but by sharing examples, personal experiences, intuitions, and how those relate to the post.
Counting all comments as prune instead of babble disincentivizes babble-comments. Is this what you want?
I don’t see all comments as criticism. Many comments are of the building up variety! It’s that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times.
Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.
The benefits of building-comments are easy to get in 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by allowing only 3 comments per day per post.
I think we have very different models of things, so I will try to clarify mine. My best example of a babble site is not in English, so I will give another one: the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. Just look at the sheer LENGTH of this page!
https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor
There are many more than 3 comments per person there.
From my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. My best discussions with friends go: one shares a model, one asks questions, or shares a different model, or shares an experience, the other reacts, and so on, for way more than three comments. More like 30 comments. It’s dialog. And yes, there are a lot of unproductive examples of that on LW, and it’s quite possible (as in, I assign it probability 0.9) that in first-order effects the rule would cut out unproductive discussions and be positive.
But I find rules that prevent the best things from happening bad in some way that I can’t explain clearly. Something like: I’m here to try to go higher. If that’s impossible, then why bother?
I also think it’s a VERY restrictive rule. I wrote more than three comments here, and you are the first one to answer me. Like, I’m right now taking part in a counter-example to “I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.”
I shared my opinions on very different and unrelated parts of this conversation here. This is my sixth comment, and I feel I reacted very low-heat. The idea that I should avoid or conserve those comments to stay within three makes me want to avoid commenting on LW altogether. The message I get from this rule is like… like I am assumed guilty of a thing I literally never do, and so have very restrictive rules placed on me, and it’s very unfriendly in a way that I find hard to describe.
Like, 90% of the activity this rule would restrict is legitimate, good comments. That is an awful false-positive ratio, even if you don’t count the you-are-bad-and-unwelcome effect, which I feel from it and you, apparently, do not.
Yeah this is the sort of solution I’m thinking of (although it sounds like you’re maybe making a more sweeping assumption than me?)
My current rough sense is that a rate limit of 3 comments per post per day (maybe with an additional wordcount-based limit per post per day) would actually be pretty reasonable at curbing the things I’m worried about (for users that seem particularly prone to causing demon threads).
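To illustrate (purely a sketch with made-up names and numbers, not a spec of anything we’d actually ship), the check would count only today’s comments by the user on that post, and optionally their total wordcount:

```typescript
// Hypothetical per-post-per-day rate limit with an optional wordcount budget.
interface TimedComment {
  postId: string;
  authorId: string;
  postedAt: Date;
  wordCount: number;
}

const MAX_COMMENTS_PER_POST_PER_DAY = 3;
const MAX_WORDS_PER_POST_PER_DAY = 1000; // illustrative number only

// True when two dates fall on the same UTC calendar day.
function sameUtcDay(a: Date, b: Date): boolean {
  return a.toISOString().slice(0, 10) === b.toISOString().slice(0, 10);
}

// True if the author may still comment on this post today under both budgets.
function mayCommentToday(
  comments: TimedComment[],
  authorId: string,
  postId: string,
  now: Date,
): boolean {
  const todays = comments.filter(
    (c) =>
      c.authorId === authorId &&
      c.postId === postId &&
      sameUtcDay(c.postedAt, now),
  );
  const wordsUsed = todays.reduce((sum, c) => sum + c.wordCount, 0);
  return (
    todays.length < MAX_COMMENTS_PER_POST_PER_DAY &&
    wordsUsed < MAX_WORDS_PER_POST_PER_DAY
  );
}
```

(The wordcount budget is the part I’m least sure about; a rolling 24-hour window might be better than calendar days, but either is straightforward to implement.)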
Complaints by whom? And why are these complaints significant?
Are you taking the stance that all or most of these complaints are valid, i.e. that the things being complained about are clearly bad (and not merely dispreferred by this or that individual LW member)?
(See also this recent comment, where I argue that at least one particular characterization of my commenting activity is just demonstrably inconsistent with reality.)
Here’s a bit of metadata on this: I can recall offhand 7 complaints from users with 2000+ karma who aren’t on the mod team (most of whom had significantly more than 2000 karma, and all of whom had some highly upvoted comments and/or posts that are upvoted in the annual review). One of them cites you as being the reason they left LessWrong a few years ago, and ~3-4 others cite you as being a central instance of a pattern that means they participate less on LessWrong, or can’t have particularly important types of conversations here.
I also think most of the mod team (at least 4 of them? maybe more) have had such complaints (as users, rather than as moderators).
I think there’s probably at least 5 more people who complained about you by name who I don’t think have particularly legible credibility beyond “being some LessWrong users.”
I’m thinking about my reply to “are the complaints valid tho?”. I have a different ontology here.
There are some problems with treating this as pointing in a particular direction. There is little opportunity for people to be prompted to express opposite-sounding opinions, and so only the above opinions are available to you.
I have a concern that Said and Zack are an endangered species that I want there to be more of on LW and I’m sad they are not more prevalent. I have some issues with how they participate, mostly about tendencies towards cultivating infinite threads instead of quickly de-escalating and reframing, but this in my mind is a less important concern than the fact that there are not enough of them. Discouraging or even outlawing Said cuts that significantly, and will discourage others.
Ray pointing out the level of complaints is informative even without the (far more effortful) judgement on the merits of each complaint. There being a lot of complaints is evidence (to both the moderation team and the site users) that it’s worth putting in effort here to figure out whether things could be better.
It is evidence that there is some sort of problem. It’s not clear evidence about what should be done about it, about what “better” means specifically. Instituting ways of not talking about the problem anymore doesn’t help with addressing it.
It didn’t seem like Said was complaining about the reports being seen as evidence that it is worth figuring out whether things could be better. Rather, he was complaining about them being used as evidence that things could be better.
If we speak precisely… in what way would they be the former without being the latter? Like, if I now think it’s more worth figuring out whether things could be better, presumably that’s because I now think it’s more likely that things could be better?
(I suppose I could also now think the amount-they-could-be-better, conditional on them being able to be better, is higher; but the probability that they could be better is unchanged. Or I could think that we’re currently acting under the assumption that things could be better, I now think that’s less likely so more worth figuring out whether the assumption is wrong. Neither seems like they fit in this case.)
Separately, I think my model of Said would say that he was not complaining, he was merely asking questions (perhaps to try to decide whether there was something to complain about, though “complain” has connotations there that my model of Said would object to).
So, if you think the mods are doing something that you think they shouldn’t be, you should probably feel free to say that (though I think there are better and worse ways to do so).
But if you think Said thinks the mods are doing something that Said thinks they shouldn’t be… idk, it feels against-the-spirit-of-Said to try to infer that from his comment? Like you’re doing the interpretive labor that he specifically wants people not to do.
My comment wasn’t well written, I shouldn’t have used the word “complaining” in reference to what Said was doing. To clarify:
As I see it, there are two separate claims:
That the complaints prove that Said has misbehaved (at least a little bit)
That the complaints increase the probability that Said has misbehaved
Said was just asking questions—but baked into his questions is the idea of the significance of the complaints, and this significance seems to be tied to claim 1.
Jefftk seems to be speaking about claim 2. So, his comment doesn’t seem like a direct response to Said’s comment, although the point is still a relevant one.
(fyi I do plan to respond to this, although don’t know how satisfying it’ll be when I do)