The tech solution I’m currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I’m leaning towards either “3 comments per post” or “3 comments per post per day”. (My ideal world, for Said, is something like “3 comments per post to start, but, if nothing controversial happens and he’s not ruining the vibe, he gets to comment more without limit.” But that’s fairly difficult to operationalize, and a lot of dev-time for a custom feature limiting one or two particular users.)
I do have a high level goal of “users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so”. The question here is “do you want the ‘real work’ of developing new rationality techniques to happen on LessWrong, or someplace else where Said et al. can’t bother you?” (the latter being what’s mostly currently happening).
So, yeah the concrete outcome here is Said not getting to comment everywhere he wants, but he’s already not getting to do that, because the relevant content + associated usage-building happens off lesswrong, and then he finds himself in a world where everyone is “suddenly” in significant agreement about some “frame control” concept he’s never heard of. (I can’t find the exact comment atm but I remember him expressing alarm at the degree of consensus on frame control, in the comments of Aella’s post. There was consensus because somewhere between 50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years. I’m not sure if there’s a world where that discussion was happening on LW because frame-control tends to come up in dicey sensitive adversarial situations)
So, I think the censorship policy you’re imagining is a fabricated option.
My current guess at actual next steps is: Said gets a “3 comments per post per day” restriction and is banned from commenting on shortform in particular (since our use case for that is specifically antithetical to the vibe Said wants); then (after also setting up some other moderation tools and making some judgment calls on some other similar-but-lower-profile users), messaging people like Logan Strohl and saying “hey, we’ve made a bunch of changes, we’d like it if you came in and tried using the site again”, and hoping that this time it actually works.
(Duncan might get a similar treatment, for fairly different reasons, although I’m more optimistic about him/us actually negotiating something that requires less heavy-handed restriction.)
a high level goal of “users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so”.
We already have a user-level personal ban feature! (Said doesn’t like it, but he can’t do anything about it.) Why isn’t the solution here just, “Users who don’t want to receive comments from Said ban him from their own posts”? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.
the concrete outcome here is Said not getting to comment everywhere he wants, but he’s already not getting to do that, because the relevant content + associated usage-building happens off lesswrong
This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I’m unlikely to guess it; you’ll have to clarify.) It’s true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by individual users—currently, that’s Elizabeth, and DirectedEvolution, and one other user).
I’m leaning towards either “3 comments per post” or “3 comments per post per day”. (My ideal world, for Said, is something like “3 comments per post to start, but, if nothing controversial happens and he’s not ruining the vibe
This would make Less Wrong worse for me. I want Said Achmiz to have unlimited, unconditional commenting privileges on my posts. (Unconditional means the software doesn’t stop Said from posting a fourth comment; “to start” is not unconditional if it requires a human to approve the fourth comment.)
Judging by the popularity of Alicorn’s comment testifying that she “[doesn’t] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything” (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I’d bet a lot of other users feel similarly. From your stated plans, it looks like you’re not taking those 43 users’ preferences into account. Why is that? This seems like a question you should be able to answer.
Judging by the popularity of Alicorn’s comment testifying that she “[doesn’t] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything” (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I’d bet a lot of other users feel similarly. From your stated plans, it looks like you’re not taking those 43 users’ preferences into account.
Stipulating that votes on this comment are more than negligibly informative on this question… it seems bizarre to count karma rather than agreement votes (currently 51 agreement from 37 votes). But also anyone who downvoted (or disagreed) here is someone who you’re counting as not being taken into account, which seems exactly backwards.
Some other random notes (probably not maximally cruxy for you, but worth flagging):
1. If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curiosity/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I’d be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.
But we’ve had tons of conversations with Said asking him to adjust his behavior, and he seems pretty committed to sticking to his current behavior. At best he seems grudgingly willing to avoid some threads if there are clear-cut rules we can spell out, but I don’t trust him to actually tell the difference in many edge cases.
We’ve spent a hundred-plus person-hours over the years thinking about how to limit Said’s damage, and we have a lot of other priorities on our plate. I consider it a priority to resolve this in a way that won’t continue to eat up more of our time.
2. I did list “actually just encourage people to use the ban tool more” as an option. (DirectedEvolution didn’t even know it was an option until it was pointed out to him recently.) If you actually want to advocate for that over a Said-specific rate-limit, I’m open to that (my model of you thinks that’s worse).
(Note, I and I think several other people on the mod team would have banned him from my comment sections if I didn’t feel an obligation as a mod/site-admin to have a more open comment section)
3. I will probably build something that lets people Opt Into More Said. I think it’s fairly likely the mod team will do some heavier-handed moderation in the nearish future, and I think a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a “let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way” option.
(I don’t expect that to really resolve your crux here but it seemed like it’s at least an improvement on the margin)
4. I think it’s plausible that the right solution is to ban him from shortform, and use shortform as the place where people can talk about whatever they want in a more open/curious vibe. I currently don’t think this is the right call, because I think it’s actually a super reasonable, centrally supported use-case of top-level posts to have sets of norms that are actively curious and invested. It seems really wrong to me to think the only kind of conversation you need to make intellectual progress is “criticize without trying to figure out what the OP is about and what problems they’re trying to solve”.
I do think, for the case of Said, building out two high level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”, letting authors choose between them, and specifically banning Said from the former, is a viable option I’d consider. I think you have previously argued against this, and Said expressed dissatisfaction with it elsewhere in this comment section.
(This solution probably wouldn’t address my concerns about Duncan though)
If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curiosity/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I’d be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.
I am a little worried that this is a generalization that doesn’t line up with actual evidence on the ground, and instead is caused by some sort of vibe spiral. (I’m reluctant to suggest a lengthy evidence review, both because of the costs and because I’m somewhat uncertain of the benefits—if the problem is that lots of authors find Said annoying or his reactions unpredictable, and we review the record and say “actually Said isn’t annoying”, those authors are unlikely to find it convincing.)
In particular, I keep thinking about this comment (noting that I might be updating too much on one example). I think we have evidence that “Said can engage with open/curious/interpretative topics/posts in a productive way”, and should maybe try to figure out what was different that time.
I will probably build something that lets people Opt Into More Said.
I think in the sense of the general garden-style conflict (rather than Said/Duncan conflict specifically) this is the only satisfactory solution that’s currently apparent, users picking the norms they get to operate under, like Commenting Guidelines, but more meaningful in practice.
For a start, there should be just two options, Athenian Garden and Socratic Garden, so that commenters can cheaply make decisions about what kinds of comments are appropriate for a particular post, without having to read custom guidelines.
I do think, for the case of Said, building out two high level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”, letting authors choose between them, and specifically banning Said from the former, is a viable option I’d consider.
Excellent. I predict that Said wouldn’t be averse to voluntarily not commenting on “open/curious/cooperative” posts, or not commenting there in the kind of style that adherents of that culture dislike, so that “specifically banning Said” from that is an unnecessary caveat.
I did list “actually just encourage people to use the ban tool more” as an option. [...] If you actually want to advocate for that over a Said-specific rate-limit, I’m open to that (my model of you thinks that’s worse).
Well, I’m glad you’re telling actual-me this rather than using your model of me. I count the fact your model of me is so egregiously poor (despite our having a number of interactions over the years) as a case study in favor of Said’s interaction style (of just asking people things, instead of falsely imagining that you can model them).
Yes, I would, actually, want to advocate for informing users about a feature that already exists that anyone can use, rather than writing new code specifically for the purpose of persecuting a particular user that you don’t like.
Analogously, if the town council of the city I live in passes a new tax increase, I might grumble about it, but I don’t regard it as a direct personal threat. If the town council passes a tax increase that applies specifically to my friend Said Achmiz, and no one else, that’s a threat to me and mine. A government that does that is not legitimate.
It seems really wrong to me to think the only kind of conversation you need to make intellectual progress is “criticize without trying to figure out what the OP is about and what problems they’re trying to solve”.
So, usually when people make this kind of “hostile paraphrase” in an argument, I tend to take it in stride. I mostly regard it as “part of the game”: I think most readers can tell the difference between an attempted fair paraphrase (which an author is expected to agree with) and an intentional hostile paraphrase (which is optimized to highlight a particular criticism, without the expectation that the author will agree with the paraphrase). I don’t tell people to be more charitable to me; I don’t ask them to pass my ideological Turing test; I just say, “That’s not what I meant,” and explain the idea again; I’m happy to do the extra work.
In this particular situation, I’m inclined to try out a different commenting style that involves me doing less interpretive labor. I think you know very well that “criticize without trying to figure out what the OP is about” is not what Said and I think is at issue. Do you think you can rephrase that sentence in a way that would pass Said’s ideological Turing test?
I consider it a priority to resolve this in a way that won’t continue to eat up more of our time.
Right, so if someone complains about Said, point out that they’re free to strong-downvote him and that they’re free to ban him from their posts. That’s much less time-consuming than writing new code! (You’re welcome.)
If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style
Sorry, I thought your job was to run a website, not dictate to people how they should think and write? (Where part of running a website includes removing content that you don’t want on the website, but that’s not the same thing as decreeing that individuals must “integrat[e] the spirit-of-[your]-models into [their] commenting style”.) Was I mistaken about what your job is?
building out two high level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”
I am strongly opposed to this because I don’t think the proposed distinction cuts reality at the joints. (I’d be happy to elaborate on request, but will omit the detailed explanation now in order to keep this comment focused.)
We already let authors write their own moderation guidelines! It’s a blank text box! If someone happens to believe in this “cooperative vs. adversarial” false dichotomy, they can write about it in the text box! How is that not enough?
We already let authors write their own moderation guidelines! It’s a blank text box!
Because it’s a blank text box, it’s not convenient for commenters to read it in detail every time, so I expect almost nobody reads it, and the guidelines are therefore not practical to follow.
With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.
Also, moderation guidelines aren’t visible on GreaterWrong at all, afaict. So Said specifically is unlikely to adjust his commenting in response to those guidelines, unless that changes.
(I assume Said mostly uses GW, since he designed it.)
I’ve been busy, so hadn’t replied to this yet, but I specifically wanted to apologize for the hostile paraphrase (I notice I’ve done that at least twice now in this thread; I’m trying to do better, but it seems important for me to notice and pay attention to).
I think I phrased the “corrigible about actually integrating the spirit-of-our-models into his commenting style” line pretty badly; Oliver and Vaniver both thought it was pretty alarming too. The thing I was trying to say, I eventually reworded in my subsequent mod announcement as:
Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.
i.e. this isn’t about Said changing his own thought process; but, like, there is a spirit-of-the-law relevant to the mod decision here, and a question of whether I need to worry about specification-gaming.
I expect you to still object to that for various reasons, and I think it’s reasonable to be pretty suspicious of me for phrasing it the way I did the first time. (I think it does convey something sus about my thought process, but, fwiw I agree it is sus and am reflecting on it)
Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.
I’m still uncertain how I feel about a lot of the details on this (and am enough of a lurker rather than poster that I suspect it’s not worth my time to figure that out / write it publicly), but I just wanted to say that I think this is an extremely good thing to include:
I will probably build something that lets people Opt Into More Said. I think it’s fairly likely the mod team will do some heavier-handed moderation in the nearish future, and I think a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a “let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way” option.
This strikes me basically as a way to move the mod team’s role more into “setting good defaults” and less “setting the only way things work”. How much y’all should move in that direction seems an open question, as it does limit how much cultivation you can do, but it seems like a very useful tool to make use of in some cases.
How technically troublesome would an allow list be?
Maybe the default is everyone gets three comments on a post. People the author has banned get zero, people the author has opted in for get unlimited, the author automatically gets unlimited comments on their own post, mods automatically get unlimited comments.
(Or if this feels more like a Said and/or Duncan specific issue, make the options “Unlimited”, “Limited”, and “None/Banned” then default to everyone at Unlimited except for Said and/or Duncan at Limited.)
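The lookup described above is simple enough to sketch concretely. Here is a rough, hypothetical version in Python (none of these names correspond to actual LessWrong code; it just illustrates the precedence of banned / author / mod / opted-in / default):

```python
# Hypothetical sketch of the per-user comment-limit lookup described above.
# Names and structure are illustrative only, not actual LessWrong code.
from dataclasses import dataclass, field

UNLIMITED = float("inf")
DEFAULT_LIMIT = 3  # the "everyone gets three comments on a post" variant

@dataclass
class Post:
    author: str
    banned: set = field(default_factory=set)      # author-banned users get 0
    allow_list: set = field(default_factory=set)  # author-opted-in users get unlimited

def comment_limit(user: str, post: Post, mods: set) -> float:
    """How many comments `user` may leave on `post` under the sketched defaults."""
    if user in post.banned:
        return 0
    if user == post.author or user in mods or user in post.allow_list:
        return UNLIMITED
    return DEFAULT_LIMIT
```

The variant in the parenthetical (default “Unlimited”, with only specific users at “Limited”) would just check a mod-maintained list of limited users and otherwise fall through to `UNLIMITED` instead of `DEFAULT_LIMIT`.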
There is definitely some term in my / the mod team’s equation for “this user is providing a lot of valuable stuff that people want on the site”. But the high-level call the moderation team is making is something like “maximize useful truths we’re figuring out”. Hearing about how many people are getting concrete value out of Said’s or Duncan’s comments is part of that equation; hearing about how many people are feeling scared or offput enough that they don’t comment/post much is also part of that equation. And there are also subtler interplays that depend on our actual model of how progress gets made.
I wonder how much of the difference in intuitions about Duncan and Said come from whether people interact with LW primarily as commenters or as authors.
The concerns about Said seem to be entirely from and centered around the concerns of authors. He makes posting more costly; he drives content away. Meanwhile, many concerns about Duncan could be phrased as being about how he interacts with commenters.
If this trend exists, it is complicated. Said gets >0 praise from authors for his comments on their own posts (e.g. Raemon here), major Said-defender Zack has written lots of well-regarded posts, and Said-banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts. Duncan also generates a fair amount of concern for attempts to set norms outside his own posts. But I think there might be a thread here.
Said-banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts.
Thank you for the compliment!
With writing science commentary, my participation is contingent on there being a specific job to do (often, “dig up quotes from links and citations and provide context”) and a lively conversation. The units of work are bite-size. It’s easy to be useful and appreciated.
Writing posts is already relatively speaking not my strong suit. There’s no preselection on people being interested enough to drive a discussion, what makes a post “interesting” is unclear, and the amount of work required to make it good is large enough that it feels like work more than play. When I do get a post out, it often fails to attract much attention. What attention it does receive is often negative, and Said is one of the more prolific providers of negative attention. Hence, I ban Said because he further inhibits me from developing in my areas of relative weakness.
My past conflict with Duncan arose when I would impute motives to him, or blur the precise distinctions in language he was attempting to draw—essentially failing to adopt the “referee” role that works so well in science posts, and putting the same negative energy I dislike receiving into my responses to Duncan’s posts. When I realized this was going on, I apologized and changed my approach, and now I no longer feel a sense of “danger” in responding to Duncan’s posts or comments. I feel that my commenting strong suit is quite compatible with friendly discourse with Duncan, and Duncan is good at generating lively discussions where my refereeing skillset may be of use.
So if I had to explain it, some people (me, Duncan) are sensitive about posting, while others are sharp in their comments (Said, anonymousaisafety). Those who are sensitive about posting will get frustrated by Said, while those who write sharp comments will often get in conflict with Duncan.
I’m not sure what other user you’re referring to besides Achmiz—it looks like there’s supposed to be another word between “about” and “and” in your first sentence, and between “about” and “could” in the last sentence of your second paragraph, but it’s not rendering correctly in my browser? Weird.
Anyway, I think the pattern you describe could be generated by a philosophical difference about where the burden of interpretive labor rests. A commenter who thinks that authors have a duty to be clear (and therefore asks clarifying questions, or makes attempted criticisms that miss the author’s intended point) might annoy authors who think that commenters have a duty to read charitably. Then the commenter might be blamed for driving authors away, and the author might be blamed for getting too angrily defensive with commenters.
major Said defender Zack has written lots of well-regarded posts
I interact with this website as an author more than a commenter these days, but in terms of the dichotomy I describe above, I am very firmly of the belief that authors have a duty to be clear. (To the extent that I expect that someone who disagrees with me, also disagrees with my proposed dichotomy; I’m not claiming to be passing anyone’s ideological Turing test.)
The other month I published a post that I was feeling pretty good about, quietly hoping that it might break a hundred karma. In fact, the comment section was very critical (in ways that I didn’t have satisfactory replies to), and the post only got 18 karma in 26 votes, an unusually poor showing for me. That made me feel a little bit sad that day, and less likely to write future posts that I could anticipate being disliked by commenters in the way that this post was disliked.
In my worldview, this is exactly how things are supposed to work. I didn’t have satisfactory replies to the critical comments. Of course that’s going to result in downvotes! Of course it made me a little bit sad that day! (By “conservation of expected feelings”: I would have felt a little bit happy if the post did well.) Of course I’m going to try not to write posts relevantly “like that” in the future!
I’ve been getting the sense that a lot of people somehow seem to disagree with me that this is exactly how things are supposed to work?—but I still don’t understand why. Or rather, I do have an intuitive model of why people seem to disagree, but I can’t quite permit myself to believe it, because it’s too uncharitable; I must not be understanding correctly.
Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.
I think “duty to be clear” skips over the hard part, which is that “being clear” is a transitive verb. It doesn’t make sense to say whether a post is clear or not clear, only who it is clear or unclear to.
To use a trivial example: well-taught Physics 201 is clear to someone who has had the prerequisite physics classes or is a physics savant, but not to a layman. Poorly taught Physics 201 is clear to a subset of the people who would understand it if well taught. And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 → Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post but use fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.
So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “100%” but being more specific than that can be hard and is prone to disagreement.
Commenters of course have every right to say “I don’t understand this” and politely ask questions. But I, and I suspect the mods and most authors, reject the idea that publishing a piece on LessWrong gives me a duty to make every reader understand it. That may cost me karma or respect and I think that’s fine*, I’m not claiming a positive right to other people’s high regard.
You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct but not at the limit, there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.
*although I think downvoting things I don’t understand is tricky specifically because it’s hard to tell where the problem lies, so I rarely do.
You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct but not at the limit, there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.
YES. I think this is hugely important, and I think it’s a pretty good definition of the difference between a confused person and a crank.
Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they’re lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or higher status (at least on the issue at hand) as the person they’re addressing. They already expect the author they’re questioning is fundamentally confused, and so they don’t waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank’s attention, since they’re obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.
There’s absolutely a middle ground. There are many times when I ask questions—let’s say of an academic author—where I think the author is probably either wrong or misguided in their analysis. But outside of pointing out specific facts that I know are wrong and suspect the author might not have noticed, I never address these authors in the manner of a crank. If I bother to contact them, it’s to ask questions to do things like:
Describe my specific disagreement succinctly, and ask the author to explain why they think or approach the issue differently
Ask about the points in the author’s argument I don’t fully understand, in case those turn out to be cruxes
Ask what they think about my counterargument, on the assumption that they’ve already thought about it and have a pretty good answer that I’m genuinely interested in hearing
This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or higher status (at least on the issue at hand) as the person they’re addressing. They already expect the author they’re questioning is fundamentally confused, and so they don’t waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank’s attention, since they’re obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.
And this attitude is particularly corrosive to feelings of trust, collaboration, “jamming together,” etc. … it’s like walking into a martial arts academy and finding a person present who scoffs at both the instructors and the other students alike, and who doesn’t offer sufficient faith to even try a given exercise once before first a) hearing it comprehensively justified and b) checking the sparring records to see if people who did that exercise win more fights.
Which, yeah, that’s one way to zero in on the best martial arts practices, if the other people around you also signed up for that kind of culture and have patience for that level of suspicion and mistrust!
(I choose martial arts specifically because it’s a domain full of anti-epistemic garbage and claims that don’t pan out.)
But in practice, few people will participate in such a martial arts academy for long, and it’s not true that a martial arts academy lacking that level of rigor makes no progress in discovering and teaching useful things to its students.
You’re describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.
The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you’re right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can’t, and no one can, then he might have a point, and the gym gets to learn something new.
If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they’re an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren’t there in the first place. It’s definitely more challenging to jam with dissonant characters like that (especially if they’re dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it’s important to realize that the problem isn’t so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
Strong disagree that I’m describing a deeply dysfunctional gym; I barely described the gym at all and it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
There’s a difference between “hey, I want to understand the underpinnings of this” and the thing I described, which is hostile to the point of “why are you even here, then?”
Edit: I view the votes on this and the parent comment as indicative of a genuine problem; jimmy above is exhibiting actually bad reasoning (à la representativeness) and the LWers who happen to be hanging around this particular comment thread are, uh, apparently unaware of this fact. Alas.
Strong disagree that I’m describing a deeply dysfunctional gym; I barely described the gym at all and it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
Well, you mentioned the scenario as an illustration of a “particularly corrosive” attitude. It therefore seems reasonable to fill in the unspecified details (like just how disruptive the guy’s behavior is, how much of everyone’s time he wastes, how many instructors are driven away in shame or irritation) with pretty negative ones—to assume the gym has in fact been corroded, being at least, say, moderately dysfunctional as a result.
Maybe “deeply dysfunctional” was going too far, but I don’t think it’s reasonable to call that “way overconfident/projection-y”. Nor does the difference between “deeply dysfunctional” and “moderately dysfunctional” matter for jimmy’s point.
votes
FYI, I’m inclined to upvote jimmy’s comment because of the second paragraph: it seems to be the perfect solution to the described situation (and to all hypothetical dysfunction in the gym, minor or major), and has some generalizability (look for cheap tests of beliefs, challenge people to do them). And your comment seems to be calling jimmy out inappropriately (as I’ve argued above), so I’m inclined to at least disagree-vote it.
“Let’s imagine that these unspecified details, which could be anywhere within a VERY wide range, are specifically such that the original point is ridiculous, in support of concluding that the original point is ridiculous” does not seem like a reasonable move to me.
Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.
But it’s not clear how important that mistake is to his comment. I expect people were mostly reacting to paragraphs 2 and 3, and you could cut paragraph 1 out and they’d stand by themselves.
Do the more-interesting parts of the comment implicitly rely on the projection/unjustified-claim? Also not clear to me. I do think the comment is overstated. (“The way to jam”?) But e.g. “the problem isn’t so much the difficulty as the inability to overcome the difficulty” seems… well, I’d say this is overstated too, but I do think it’s pointing at something that seems valuable to keep in mind even if we accept that the gym is functional.
So I don’t think it’s unreasonable that the parent got significantly upvoted, though I didn’t upvote it myself; and I don’t think it’s unreasonable that your correction didn’t, since it looks correct to me but like it’s not responding to the main point.
Maybe you think paragraphs 2 and 3 were relying more on the projection than it currently seems to me? In that case you actually are responding to what-I-see-as the main point. But if so I’d need it spelled out in more detail.
Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.
FWIW, that is a claim I’m fully willing and able to justify. It’s hard to disclaim all the possible misinterpretations in a brief comment (e.g. “deeply” != “very”), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.
There’s a difference between “hey, I want to understand the underpinnings of this” and the thing I described, which is hostile to the point of “why are you even here, then?”
Yes, and that’s why I described the attitude as “dysfunctionally dissonant” (emphasis in original). It’s not a good way of challenging the instructors, and not the way I recommend behaving.
What I’m talking about is how a healthy gym environment is robust to this sort of dysfunctional dissonance, and how to productively relate to unskilled dissonance by practicing skillfully enough yourself that the system’s combined dysfunction never becomes supercritical and instead decays towards productive cooperation.
it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
That’s certainly one possibility. But isn’t it also conceivable that I simply see underlying dynamics (and lack thereof) which you don’t see, and which justify the confidence level I display?
It certainly makes sense to track the hypothesis that I am overconfident here, but ironically it strikes me as overconfident to be asserting that I am being overconfident without first checking things like “Can I pass his ITT”/”Can I point to a flaw in his argument that makes him stutter if not change his mind”/etc.
To be clear, my view here is based on years of thinking about this kind of problem and practicing my proposed solutions with success, including in a literal martial arts gym for the last eight years. Perhaps I should have written more about these things on LW so my confidence doesn’t appear to come out of nowhere, but I do believe I am able to justify what I’m saying very well and won’t hesitate to do so if anyone wants further explanation or sees something which doesn’t seem to fit. And hey, if it turns out I’m wrong about how well supported my perspective is, I promise not to be a poor sport about it.
jimmy above is exhibiting actually bad reasoning (à la representativeness)
In absence of an object level counterargument, this is textbook ad hominem. I won’t argue that there isn’t a place for that (or that it’s impossible that my reasoning is flawed), but I think it’s hard to argue that it isn’t premature here. As a general rule, anyone that disagrees with anyone can come up with a million accusations of this sort, and it isn’t uncommon for some of it to be right to an extent, but it’s really hard to have a productive conversation if such accusations are used as a first resort rather than as a last resort. Especially when they aren’t well substantiated.
I see that you’ve deactivated your account now so it might be too late, but I want to point out explicitly that I actively want you to stick around and feel comfortable contributing here. I’m pushing back against some of the things you’re saying because I think that it’s important to do so, but I do not harbor any ill will towards you nor do I think what you said was “ridiculous”. I hope you come back.
Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I’m unlikely to guess it; you’ll have to clarify.
I thought it was a reference to, among other things, this exchange where Said says one of Duncan’s Medium posts was good, and Duncan responds that his decision to not post it on LW was because of Said. If you’re observing that Said could just comment on Medium instead, or post it as a linkpost on LW and comment there, I think you’re correct. [There are, of course, other things that are not posted publicly, where I think it then becomes true.]
I do want to acknowledge that, based on various comments and vote patterns, I agree it seems like a pretty controversial call, and I model it as something like “spending down and/or making a bet with a limited resource” (maybe two specific resources: “trust in the mods” and “some groups of people’s willingness to put up with the site being optimized in a way they think is wrong”).
Despite that, I think it is the right call to limit Said significantly in some way, but I don’t think we can make that many moderation calls on users this established that there this controversial without causing some pretty bad things to happen.
I don’t think we can make that many moderation calls on users this established that there [sic] this controversial without causing some pretty bad things to happen.
Indeed. I would encourage you to ask yourself whether the number referred to by “that many” is greater than zero.
50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years
I don’t remember this. I feel like Aella’s post introduced the term?
A better example might be Circling, though I think Said might have had a point: it hadn’t been carefully scrutinized, a lot of people had just been doing it.
Frame control was a pretty central topic on “what’s going on with Brent?” two years prior, as well as in some other circumstances. We’d been talking about it internally at Lightcone/LessWrong during that time.
I think the term was getting used, but makes sense if you weren’t as involved in those conversations. (I just checked and there’s only one old internal lw-slack message about it from 2019, but it didn’t feel like a new term to me at the time and pretty sure it came up a bunch on FB and in moderation convos periodically under that name)
The tech solution I’m currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I’m leaning towards either “3 comments per post” or “3 comments per post per day”. (My ideal world, for Said, is something like “3 comments per post to start, but, if nothing controversial happens and he’s not ruining the vibe, he gets to comment more without limit.” But that’s fairly difficult to operationalize, and a lot of dev-time for a custom feature limiting one or two particular users.)
I do have a high-level goal of “users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so”. The question here is “do you want the ‘real work’ of developing new rationality techniques to happen on LessWrong, or someplace else where Said/etc can’t bother you?” (which is what’s mostly currently happening).
So, yeah, the concrete outcome here is Said not getting to comment everywhere he wants, but he’s already not getting to do that, because the relevant content + associated usage-building happens off LessWrong, and then he finds himself in a world where everyone is “suddenly” in significant agreement about some “frame control” concept he’s never heard of. (I can’t find the exact comment atm, but I remember him expressing alarm at the degree of consensus on frame control in the comments of Aella’s post. There was consensus because somewhere between 50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years. I’m not sure there’s a world where that discussion was happening on LW, because frame control tends to come up in dicey, sensitive, adversarial situations.)
So, I think the censorship policy you’re imagining is a fabricated option.
My current guess at actual next steps is: Said gets a “3 comments per post per day” restriction, is banned from commenting on shortform in particular (since our use case for that is specifically antithetical to the vibe Said wants), and then (after also setting up some other moderation tools and making some judgment calls on some other similar-but-lower-profile users), we message people like Logan Strohl saying “hey, we’ve made a bunch of changes, we’d like it if you came in and tried using the site again”, and hope that this time it actually works.
(Duncan might get a similar treatment, for fairly different reasons, although I’m more optimistic about he/us actually negotiating something that requires less heavyhanded restriction)
We already have a user-level personal ban feature! (Said doesn’t like it, but he can’t do anything about it.) Why isn’t the solution here just, “Users who don’t want to receive comments from Said ban him from their own posts”? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.
This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I’m unlikely to guess it; you’ll have to clarify.) It’s true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by individual users—currently, that’s Elizabeth, and DirectedEvolution, and one other user).
This would make Less Wrong worse for me. I want Said Achmiz to have unlimited, unconditional commenting privileges on my posts. (Unconditional means the software doesn’t stop Said from posting a fourth comment; “to start” is not unconditional if it requires a human to approve the fourth comment.)
More generally, as a long-time user of Less Wrong (original join date 26 February 2009, author of five Curated posts) and preceding community (first Overcoming Bias comment 22 December 2007, attendee of the first Overcoming Bias meetup on 21 February 2008), I do not want Said Achmiz to be a second-class citizen in my garden. If we have a user-level personal ban feature that anyone can use, I might or might not think that’s a good feature to have, but at least it’s a feature that everyone can use; it doesn’t arbitrarily single out a single user on a site-wide basis.
Judging by the popularity of Alicorn’s comment testifying that she “[doesn’t] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything” (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I’d bet a lot of other users feel similarly. From your stated plans, it looks like you’re not taking those 43 users’ preferences into account. Why is that? This seems like a question you should be able to answer.
Stipulating that votes on this comment are more than negligibly informative on this question… it seems bizarre to count karma rather than agreement votes (currently 51 agreement from 37 votes). But also anyone who downvoted (or disagreed) here is someone who you’re counting as not being taken into account, which seems exactly backwards.
Some other random notes (probably not maximally cruxy for you, but):
1. If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curiosity/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I’d be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.
But we’ve had tons of conversations with Said asking him to adjust his behavior, and he seems pretty committed to sticking to his current behavior. At best he seems grudgingly willing to avoid some threads if there are clear-cut rules we can spell out, but I don’t trust him to actually tell the difference in many edge cases.
We’ve spent a hundred+ person-hours over the years thinking about how to limit Said’s damage, and have a lot of other priorities on our plate. I consider it a priority to resolve this in a way that won’t continue to eat up more of our time.
2. I did list “actually just encourage people to use the ban tool more” as an option. (DirectedEvolution didn’t even know it was an option until it was pointed out to him recently.) If you actually want to advocate for that over a Said-specific rate-limit, I’m open to that (my model of you thinks that’s worse).
(Note: I (and I think several other people on the mod team) would have banned him from my comment sections if I didn’t feel an obligation as a mod/site-admin to have a more open comment section.)
3. I will probably build something that lets people Opt Into More Said. I think it’s fairly likely the mod team will do some heavier-handed moderation in the nearish future, and I think a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a “let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way” option.
(I don’t expect that to really resolve your crux here but it seemed like it’s at least an improvement on the margin)
4. I think it’s plausible that the right solution is to ban him from shortform, and use shortform as the place where people can talk about whatever they want in a more open/curious vibe. I currently don’t think this is the right call, because I just think it’s… just actually a super reasonable, centrally supported use-case of top-level posts to have sets of norms that are actively curious and invested. It seems really wrong to me to think the only kind of conversation you need to make intellectual progress is “criticize without trying to figure out what the OP is about and what problems they’re trying to solve”.
I do think, for the case of Said, building out two high-level normsets of “open/curious/cooperative” and “debate/adversarial collaboration/thicker-skin-required”, letting authors choose between them, and specifically banning Said from the former, is a viable option I’d consider. I think you have previously argued against this, and Said expressed dissatisfaction with it elsewhere in this comment section.
(This solution probably wouldn’t address my concerns about Duncan though)
I am a little worried that this is a generalization that doesn’t line up with actual evidence on the ground, and instead is caused by some sort of vibe spiral. (I’m reluctant to suggest a lengthy evidence review, both because of the costs and because I’m somewhat uncertain of the benefits—if the problem is that lots of authors find Said annoying or his reactions unpredictable, and we review the record and say “actually Said isn’t annoying”, those authors are unlikely to find it convincing.)
In particular, I keep thinking about this comment (noting that I might be updating too much on one example). I think we have evidence that “Said can engage with open/curious/interpretative topics/posts in a productive way”, and should maybe try to figure out what was different that time.
I think, in the sense of the general garden-style conflict (rather than the Said/Duncan conflict specifically), this is the only satisfactory solution that’s currently apparent: users picking the norms they get to operate under, like Commenting Guidelines, but more meaningful in practice.
There should be, for a start, just two options, Athenian Garden and Socratic Garden, so that commenters can cheaply make decisions about what kinds of comments are appropriate for a particular post, without having to read custom guidelines.
Excellent. I predict that Said wouldn’t be averse to voluntarily not commenting on “open/curious/cooperative” posts, or not commenting there in the kind of style that adherents of that culture dislike, so that “specifically banning Said” from that is an unnecessary caveat.
Well, I’m glad you’re telling actual-me this rather than using your model of me. I count the fact your model of me is so egregiously poor (despite our having a number of interactions over the years) as a case study in favor of Said’s interaction style (of just asking people things, instead of falsely imagining that you can model them).
Yes, I would, actually, want to advocate for informing users about a feature that already exists that anyone can use, rather than writing new code specifically for the purpose of persecuting a particular user that you don’t like.
Analogously, if the town council of the city I live in passes a new tax increase, I might grumble about it, but I don’t regard it as a direct personal threat. If the town council passes a tax increase that applies specifically to my friend Said Achmiz, and no one else, that’s a threat to me and mine. A government that does that is not legitimate.
So, usually when people make this kind of “hostile paraphrase” in an argument, I tend to take it in stride. I mostly regard it as “part of the game”: I think most readers can tell the difference between an attempted fair paraphrase (which an author is expected to agree with) and an intentional hostile paraphrase (which is optimized to highlight a particular criticism, without the expectation that the author will agree with the paraphrase). I don’t tell people to be more charitable to me; I don’t ask them to pass my ideological Turing test; I just say, “That’s not what I meant,” and explain the idea again; I’m happy to do the extra work.
In this particular situation, I’m inclined to try out a different commenting style that involves me doing less interpretive labor. I think you know very well that “criticize without trying to figure out what the OP is about” is not what Said and I think is at issue. Do you think you can rephrase that sentence in a way that would pass Said’s ideological Turing test?
Right, so if someone complains about Said, point out that they’re free to strong-downvote him and that they’re free to ban him from their posts. That’s much less time-consuming than writing new code! (You’re welcome.)
Sorry, I thought your job was to run a website, not dictate to people how they should think and write? (Where part of running a website includes removing content that you don’t want on the website, but that’s not the same thing as decreeing that individuals must “integrat[e] the spirit-of-[your]-models into [their] commenting style”.) Was I mistaken about what your job is?
I am strongly opposed to this because I don’t think the proposed distinction cuts reality at the joints. (I’d be happy to elaborate on request, but will omit the detailed explanation now in order to keep this comment focused.)
We already let authors write their own moderation guidelines! It’s a blank text box! If someone happens to believe in this “cooperative vs. adversarial” false dichotomy, they can write about it in the text box! How is that not enough?
Because it’s a blank text box, it’s not convenient for commenters to read it in detail every time, so I expect almost nobody reads it; as a result, these guidelines are not practical to follow.
With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.
Also, moderation guidelines aren’t visible on GreaterWrong at all, afaict. So Said specifically is unlikely to adjust his commenting in response to those guidelines, unless that changes.
(I assume Said mostly uses GW, since he designed it.)
I’ve been busy, so hadn’t replied to this yet, but I specifically wanted to apologize for the hostile paraphrase (I notice I’ve done that at least twice now in this thread; I’m trying to do better, but it seems important for me to notice and pay attention to).
I think I phrased the “corrigible about actually integrating the spirit-of-our-models into his commenting style” line pretty badly; Oliver and Vaniver also both thought it was pretty alarming. The thing I was trying to say I eventually reworded in my subsequent mod announcement as:
i.e. this isn’t about Said changing his own thought process, but, like, there is a spirit-of-the-law relevant in the mod decision here, and whether I need to worry about specification-gaming.
I expect you to still object to that for various reasons, and I think it’s reasonable to be pretty suspicious of me for phrasing it the way I did the first time. (I think it does convey something sus about my thought process, but, fwiw I agree it is sus and am reflecting on it)
FYI, my response to this is waiting for an answer to my question in the first paragraph of this comment.
I’m still uncertain how I feel about a lot of the details on this (and am enough of a lurker rather than poster that I suspect it’s not worth my time to figure that out / write it publicly), but I just wanted to say that I think this is an extremely good thing to include:
This strikes me basically as a way to move the mod team’s role more into “setting good defaults” and less “setting the only way things work”. How much y’all should move in that direction seems an open question, as it does limit how much cultivation you can do, but it seems like a very useful tool to make use of in some cases.
How technically troublesome would an allow list be?
Maybe the default is everyone gets three comments on a post. People the author has banned get zero, people the author has opted in for get unlimited, the author automatically gets unlimited comments on their own post, mods automatically get unlimited comments.
(Or if this feels more like a Said and/or Duncan specific issue, make the options “Unlimited”, “Limited”, and “None/Banned” then default to everyone at Unlimited except for Said and/or Duncan at Limited.)
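For concreteness, the limit-resolution rule described above can be sketched in a few lines. (This is a hypothetical illustration of the scheme, not LessWrong's actual codebase; all names and data structures here are made up.)

```python
from dataclasses import dataclass, field

UNLIMITED = float("inf")  # sentinel: no cap on comments


@dataclass
class Post:
    author: str
    banned: set = field(default_factory=set)     # author-banned users: zero comments
    allowlist: set = field(default_factory=set)  # author-opted-in users: no cap


def comment_limit(user: str, post: Post, mods: set, default: int = 3) -> float:
    """Resolve the max number of comments `user` may leave on `post`."""
    if user == post.author or user in mods:
        return UNLIMITED  # authors and mods are never limited on the post
    if user in post.banned:
        return 0          # banned users get zero
    if user in post.allowlist:
        return UNLIMITED  # opted-in users get unlimited
    return default        # everyone else gets the site default (e.g. 3)
```

The “Unlimited/Limited/None” variant in the parenthetical is the same function with a per-user tier lookup substituted for the flat `default`.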
My prediction is that those users are primarily upvoting it for what it’s saying about Duncan rather than about Said.
To spell out what evidence I’m looking at:
There is definitely some term in my / the mod team’s equation for “this user is providing a lot of valuable stuff that people want on the site”. But the high-level call the moderation team is making is something like “maximize useful truths we’re figuring out”. Hearing about how many people are getting concrete value out of Said’s or Duncan’s comments is part of that equation; hearing about how many people are feeling scared or offput enough that they don’t comment/post much is also part of that equation. And there are also subtler interplays that depend on our actual model of how progress gets made.
I wonder how much of the difference in intuitions about Duncan and Said come from whether people interact with LW primarily as commenters or as authors.
The concerns about Said seem to be entirely from and centered around the concerns of authors. He makes posting mostly costly, he drives content away. Meanwhile many concerns about Duncan could be phrased as being about how he interacts with commenters.
If this trend exists, it is complicated. Said gets >0 praise from authors for his comments on their own posts (e.g. Raemon here), major Said-defender Zack has written lots of well-regarded posts, and Said-banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts. Duncan also generates a fair amount of concern for attempts to set norms outside his own posts. But I think there might be a thread here.
Thank you for the compliment!
With writing science commentary, my participation is contingent on there being a specific job to do (often, “dig up quotes from links and citations and provide context”) and a lively conversation. The units of work are bite-size. It’s easy to be useful and appreciated.
Writing posts is already relatively speaking not my strong suit. There’s no preselection on people being interested enough to drive a discussion, what makes a post “interesting” is unclear, and the amount of work required to make it good is large enough that it feels like work more than play. When I do get a post out, it often fails to attract much attention. What attention it does receive is often negative, and Said is one of the more prolific providers of negative attention. Hence, I ban Said because he further inhibits me from developing in my areas of relative weakness.
My past conflict with Duncan arose when I would impute motives to him, or blur the precise distinctions in language he was attempting to draw—essentially failing to adopt the “referee” role that works so well in science posts, and putting the same negative energy I dislike receiving into my responses to Duncan’s posts. When I realized this was going on, I apologized and changed my approach, and now I no longer feel a sense of “danger” in responding to Duncan’s posts or comments. I feel that my commenting strong suit is quite compatible with friendly discourse with Duncan, and Duncan is good at generating lively discussions where my refereeing skillset may be of use.
So if I had to explain it, some people (me, Duncan) are sensitive about posting, while others are sharp in their comments (Said, anonymousaisafety). Those who are sensitive about posting will get frustrated by Said, while those who write sharp comments will often get in conflict with Duncan.
I’m not sure what other user you’re referring to besides Achmiz—it looks like there’s supposed to be another word between “about” and “and” in your first sentence, and between “about” and “could” in the last sentence of your second paragraph, but it’s not rendering correctly in my browser? Weird.
Anyway, I think the pattern you describe could be generated by a philosophical difference about where the burden of interpretive labor rests. A commenter who thinks that authors have a duty to be clear (and therefore asks clarifying questions, or makes attempted criticisms that miss the author’s intended point) might annoy authors who think that commenters have a duty to read charitably. Then the commenter might be blamed for driving authors away, and the author might be blamed for getting too angrily defensive with commenters.
I interact with this website as an author more than a commenter these days, but in terms of the dichotomy I describe above, I am very firmly of the belief that authors have a duty to be clear. (To the extent that I expect that someone who disagrees with me, also disagrees with my proposed dichotomy; I’m not claiming to be passing anyone’s ideological Turing test.)
The other month I published a post that I was feeling pretty good about, quietly hoping that it might break a hundred karma. In fact, the comment section was very critical (in ways that I didn’t have satisfactory replies to), and the post only got 18 karma in 26 votes, an unusually poor showing for me. That made me feel a little bit sad that day, and less likely to write future posts that I could anticipate being disliked by commenters in the way that this post was disliked.
In my worldview, this is exactly how things are supposed to work. I didn’t have satisfactory replies to the critical comments. Of course that’s going to result in downvotes! Of course it made me a little bit sad that day! (By “conservation of expected feelings”: I would have felt a little bit happy if the post did well.) Of course I’m going to try not to write posts relevantly “like that” in the future!
I’ve been getting the sense that a lot of people somehow seem to disagree with me that this is exactly how things are supposed to work?—but I still don’t understand why. Or rather, I do have an intuitive model of why people seem to disagree, but I can’t quite permit myself to believe it, because it’s too uncharitable; I must not be understanding correctly.
Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.
I think “duty to be clear” skips over the hard part, which is that “being clear” is a transitive verb. It doesn’t make sense to say whether a post is clear or not, only whom it is clear or unclear to.
To use a trivial example: well-taught Physics 201 is clear if you’ve had the prerequisite physics classes or are a physics savant, but not to laymen. Poorly taught Physics 201 is clear to a subset of the people who would understand it if well-taught. And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 → Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post but use fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.
So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “100%” but being more specific than that can be hard and is prone to disagreement.
Commenters of course have every right to say “I don’t understand this” and politely ask questions. But I, and I suspect the mods and most authors, reject the idea that publishing a piece on LessWrong gives me a duty to make every reader understand it. That may cost me karma or respect and I think that’s fine*, I’m not claiming a positive right to other people’s high regard.
You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct, but not at the limit: there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.
*although I think downvoting things I don’t understand is tricky specifically because it’s hard to tell where the problem lies, so I rarely do.
YES. I think this is hugely important, and I think it’s a pretty good definition of the difference between a confused person and a crank.
Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they’re lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.
Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves status similar to or higher than that of the person they’re addressing (at least on the issue at hand). They already expect the author they’re questioning is fundamentally confused, and so they don’t waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank’s attention, since they’re obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.
There’s absolutely a middle ground. There are many times when I ask questions—let’s say of an academic author—where I think the author is probably either wrong or misguided in their analysis. But outside of pointing out specific facts that I know are wrong and suspect the author might not have noticed, I never address these authors in the manner of a crank. If I bother to contact them, it’s to ask questions to do things like:
Describe my specific disagreement succinctly, and ask the author to explain why they think about or approach the issue differently
Ask about the points in the author’s argument I don’t fully understand, in case those turn out to be cruxes
Ask what they think about my counterargument, on the assumption that they’ve already thought about it and have a pretty good answer that I’m genuinely interested in hearing
This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers.
And this attitude is particularly corrosive to feelings of trust, collaboration, “jamming together,” etc. … it’s like walking into a martial arts academy and finding a person present who scoffs at both the instructors and the other students alike, and who doesn’t offer sufficient faith to even try a given exercise once before first a) hearing it comprehensively justified and b) checking the sparring records to see if people who did that exercise win more fights.
Which, yeah, that’s one way to zero in on the best martial arts practices, if the other people around you also signed up for that kind of culture and have patience for that level of suspicion and mistrust!
(I choose martial arts specifically because it’s a domain full of anti-epistemic garbage and claims that don’t pan out.)
But in practice, few people will participate in such a martial arts academy for long, and it’s not true that a martial arts academy lacking that level of rigor makes no progress in discovering and teaching useful things to its students.
You’re describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.
The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you’re right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can’t, and no one can, then he might have a point, and the gym gets to learn something new.
If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they’re an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren’t there in the first place. It’s definitely more challenging to jam with dissonant characters like that (especially if they’re dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it’s important to realize that the problem isn’t so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
Strong disagree that I’m describing a deeply dysfunctional gym; I barely described the gym at all and it’s way overconfident/projection-y to extrapolate “deeply dysfunctional” from what I said.
There’s a difference between “hey, I want to understand the underpinnings of this” and the thing I described, which is hostile to the point of “why are you even here, then?”
Edit: I view the votes on this and the parent comment as indicative of a genuine problem; jimmy above is exhibiting actually bad reasoning (à la representativeness) and the LWers who happen to be hanging around this particular comment thread are, uh, apparently unaware of this fact. Alas.
Well, you mentioned the scenario as an illustration of a “particularly corrosive” attitude. It therefore seems reasonable to fill in the unspecified details (like just how disruptive the guy’s behavior is, how much of everyone’s time he wastes, how many instructors are driven away in shame or irritation) with pretty negative ones—to assume the gym has in fact been corroded, being at least, say, moderately dysfunctional as a result.
Maybe “deeply dysfunctional” was going too far, but I don’t think it’s reasonable to call that “way overconfident/projection-y”. Nor does the difference between “deeply dysfunctional” and “moderately dysfunctional” matter for jimmy’s point.
FYI, I’m inclined to upvote jimmy’s comment because of the second paragraph: it seems to be the perfect solution to the described situation (and to all hypothetical dysfunction in the gym, minor or major), and has some generalizability (look for cheap tests of beliefs, challenge people to do them). And your comment seems to be calling jimmy out inappropriately (as I’ve argued above), so I’m inclined to at least disagree-vote it.
“Let’s imagine that these unspecified details, which could be anywhere within a VERY wide range, are specifically such that the original point is ridiculous, in support of concluding that the original point is ridiculous” does not seem like a reasonable move to me.
Separately:
https://www.lesswrong.com/posts/WsvpkCekuxYSkwsuG/overconfidence-is-deceit
I think my feeling here is:
Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.
But it’s not clear how important that mistake is to his comment. I expect people were mostly reacting to paragraphs 2 and 3, and you could cut paragraph 1 out and they’d stand by themselves.
Do the more-interesting parts of the comment implicitly rely on the projection/unjustified-claim? Also not clear to me. I do think the comment is overstated. (“The way to jam”?) But e.g. “the problem isn’t so much the difficulty as the inability to overcome the difficulty” seems… well, I’d say this is overstated too, but I do think it’s pointing at something that seems valuable to keep in mind even if we accept that the gym is functional.
So I don’t think it’s unreasonable that the parent got significantly upvoted, though I didn’t upvote it myself; and I don’t think it’s unreasonable that your correction didn’t, since it looks correct to me but like it’s not responding to the main point.
Maybe you think paragraphs 2 and 3 were relying more on the projection than it currently seems to me? In that case you actually are responding to what-I-see-as the main point. But if so I’d need it spelled out in more detail.
FWIW, that is a claim I’m fully willing and able to justify. It’s hard to disclaim all the possible misinterpretations in a brief comment (e.g. “deeply” != “very”), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.
Yes, and that’s why I described the attitude as “dysfunctionally dissonant” (emphasis in original). It’s not a good way of challenging the instructors, and not the way I recommend behaving.
What I’m talking about is how a healthy gym environment is robust to this sort of dysfunctional dissonance, and how to productively relate to unskilled dissonance by practicing skillfully enough yourself that the system’s combined dysfunction never becomes supercritical and instead decays towards productive cooperation.
That’s certainly one possibility. But isn’t it also conceivable though that I simply see underlying dynamics (and lack thereof) which you don’t see, and which justify the confidence level I display?
It certainly makes sense to track the hypothesis that I am overconfident here, but ironically it strikes me as overconfident to be asserting that I am being overconfident without first checking things like “Can I pass his ITT”/”Can I point to a flaw in his argument that makes him stutter if not change his mind”/etc.
To be clear, my view here is based on years of thinking about this kind of problem and practicing my proposed solutions with success, including in a literal martial arts gym for the last eight years. Perhaps I should have written more about these things on LW so my confidence doesn’t appear to come out of nowhere, but I do believe I am able to justify what I’m saying very well and won’t hesitate to do so if anyone wants further explanation or sees something which doesn’t seem to fit. And hey, if it turns out I’m wrong about how well supported my perspective is, I promise not to be a poor sport about it.
In absence of an object level counterargument, this is textbook ad hominem. I won’t argue that there isn’t a place for that (or that it’s impossible that my reasoning is flawed), but I think it’s hard to argue that it isn’t premature here. As a general rule, anyone that disagrees with anyone can come up with a million accusations of this sort, and it isn’t uncommon for some of it to be right to an extent, but it’s really hard to have a productive conversation if such accusations are used as a first resort rather than as a last resort. Especially when they aren’t well substantiated.
I see that you’ve deactivated your account now so it might be too late, but I want to point out explicitly that I actively want you to stick around and feel comfortable contributing here. I’m pushing back against some of the things you’re saying because I think that it’s important to do so, but I do not harbor any ill will towards you nor do I think what you said was “ridiculous”. I hope you come back.
I thought it was a reference to, among other things, this exchange where Said says one of Duncan’s Medium posts was good, and Duncan responds that his decision to not post it on LW was because of Said. If you’re observing that Said could just comment on Medium instead, or post it as a linkpost on LW and comment there, I think you’re correct. [There are, of course, other things that are not posted publicly, where I think it then becomes true.]
I do want to acknowledge that, based on various comments and vote patterns, I agree it seems like a pretty controversial call. I model it as something like “spending down, and/or making a bet with, a limited resource” (maybe two specific resources: “trust in the mods” and “some groups of people’s willingness to put up with the site being optimized in a way they think is wrong”).
Despite that, I think it is the right call to limit Said significantly in some way, but I don’t think we can make that many moderation calls on users this established that are this controversial without causing some pretty bad things to happen.
Indeed. I would encourage you to ask yourself whether the number referred to by “that many” is greater than zero.
I don’t remember this. I feel like Aella’s post introduced the term?
A better example might be Circling, though I think Said might have had a point that it hadn’t been carefully scrutinized; a lot of people had just been doing it.
Frame control was a pretty central topic on “what’s going on with Brent?” two years prior, as well as in some other circumstances. We’d been talking about it internally at Lightcone/LessWrong during that time.
Hmm, yeah, I can see that. Perhaps just not under that name.
I think the term was getting used, but it makes sense if you weren’t as involved in those conversations. (I just checked and there’s only one old internal lw-slack message about it from 2019, but it didn’t feel like a new term to me at the time, and I’m pretty sure it came up a bunch on FB and in moderation convos periodically under that name.)