I’ve been thinking for a week, and trying to sanity-check whether there are actual good examples of Said doing-the-thing-I’ve-complained-about, rather than “I formed a stereotype of Said and pattern-matched to it too quickly”, and such.
I think Said is a pretty confusing case though. I’m going to lay out my current thinking here, in a number of comments, and I expect at least a few more days of discussion as the LessWrong community digests this. I’ve pinned this post to the top of the frontpage for the day so users who weren’t following the discussion can decide whether to weigh in.
Here’s a quick overview of how I think about Said moderation:
Re: Recent Duncan Conflict.
I think he did some moderation-worthy things in the recent conflict with Duncan, but a) so did Duncan, and I think there’s an “it takes two to tango” aspect of demon threads, b) at most, those’d result in me giving one or both of them a 1-week ban and then calling it a day. I basically endorse Vaniver’s take on some object level stuff. I have a bit more to say but not much.
I’d be a lot less wary of the previous pattern if I felt like Said was also contributing significantly more value to LessWrong. [Edit: I do, to be clear, think Said has contributed significant value, both in terms of keeping the spirit of the sequences alive in the world a la readthesequences.com, and through being a voice with a relatively rare (these days) perspective that keeps us honest in important ways. But I think the costs are, in fact, really high, and I think the object level value isn’t enough to fully counterbalance it]
Prior discussion and warnings.
We’ve had numerous discussions with Said about this (I think we’ve easily spent 100+ hours of moderator-time on it, and probably more like 200), including an explicit moderation warning.
Few recent problematic pattern instances.
That all said, prior to this ~month’s conflict with Duncan, I don’t have a confident belief that Said has recently strongly embodied the pattern I’m worried about. I think it was more common ~5 years ago. I cut Said some slack for the convo with Duncan because I think Duncan is kind of frustrating to argue with.
THAT said, I think it’s crept up at least somewhat occasionally in the past 3 years, and having to evaluate whether it’s creeping up to an unacceptable level is fairly costly.
THAT THAT said, I do appreciate that the first time we gave him an explicit moderation notice, I don’t think we had any problems for ~3 years afterwards.
Strong(ish) statement of intent
Said’s made a number of comments that make me think he would still be doing a pattern I consider problematic if the opportunity arose. I think he’ll follow the letter of the law if we give it to him, but it’s difficult to specify a letter-of-the-law that does the thing I care about.
A thing that is quite important to me is that users feel comfortable ignoring Said if they don’t think he’s productive to engage with. (See below for more thoughts on this). One reason this is difficult is that it’s hard to establish common knowledge about it among authors. Another reason is that I think Said’s conversational patterns have the effect of making authors and other commenters feel obliged to engage with him (but, this is pretty hard to judge in a clear-cut way)
For now, after a bunch of discussion with other moderators, reading the thread-so-far, and talking with various advisors – my current call is giving Said a rate limit of 3-comments-per-post-per-week. See this post on the general philosophy of rate limiting as a moderation tool we’re experimenting with. I think there’s a decent chance we’ll ship some new features soon that make this actually a bit more lenient, but don’t want to promise that at the moment.
I am not very confident in this call, and am open to more counterarguments here, from Said or others. I’ll talk more about some of the reasoning here at the end of this comment. But I want to start by laying out some more background reasoning for the entire moderation decision.
In particular, if either Said makes a case that he can obey the spirit of “don’t imply people have an obligation to engage with his comments”; or, someone suggests a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way, I’d feel fairly good about revoking the rate-limit.
(Note: one counterproposal I’ve seen is to develop a rate-limit based entirely on karma rather than moderator judgment, and that it is better to do this than to have moderators make individual judgment calls about specific users. I do think this idea has merit, although it’s hard to build. I have more to say about it at the end)
The usual pattern of Said’s comments as I experience them has been (and I think this would be reasonably straightforward to verify):
Said makes a highly upvoted comment asking a question, usually implicitly pointing out something that is unclear to many in the post
Author makes a reasonably highly upvoted reply
Said says that the explanation was basically completely useless to him; this often gets some upvotes, but drastically fewer than the top-level question
Author tries to clarify some more; this gets far fewer upvotes than the original reply
Said expresses more confusion, this usually gets very few upvotes
More explanations from the author, almost no upvotes
Said expresses more confusion, often being downvoted, with the author and others expressing frustration
I think the most central example of this is in this thread on circling, where AFAICT Said asked for examples of some situations where social manipulation is “good.” Qiaochu and Sarah Constantin offered some examples. Said responded to both of them by questioning their examples and doubting their experience in a way that is pretty frustrating to respond to (and in the Sarah case seemed to me like a central example of Said missing the point, and the evo-psych argument not even making sense in context, which makes me distrust his taste on these matters). [1, 2]
I don’t actually remember more examples of that pattern offhand. I might be persuaded that I overupdated on some early examples. But after thinking a few days, I think a cruxy piece of evidence on how I think it makes sense to moderate Said is this comment from ~3 years ago:
There is always an obligation by any author to respond to anyone’s comment along these lines*. If no response is provided to (what ought rightly to be) simple requests for clarification (such as requests to, at least roughly, define or explain an ambiguous or questionable term, or requests for examples of some purported phenomenon), the author should be interpreted as ignorant. These are not artifacts of my particular commenting style, nor are they unfortunate-but-erroneous implications—they are normatively correct general principles.
*where I think “these lines” means “asking for examples”, “asking people to define terms,” etc.
For completeness, Said later elaborates:
Where does that obligation come from?
I should clarify, first of all, that the obligation by the author to respond to the comment is not legalistically specific. By this I mean that it can be satisfied in any of a number of ways; a literal reply-to-comment is just one of them. Others include:
Mentioning the comment in a subsequent post (“In the comments on yesterday’s post, reader so-and-so asked such-and-such a question. And I now reply thus: …”).
Linking to one’s post or comment elsewhere which constitutes an answer to the question.
Someone else linking to a post or comment elsewhere (by the OP) which constitutes an answer to the question.
Someone else answering the question in the OP’s stead (and the OP giving some indication that this answer is endorsed).
Answering an identical, or very similar, question elsewhere (and someone providing a link or citation).
In short, I’m not saying that there’s a specific obligation for a post author to post a reply comment, using the Less Wrong forum software, directly to any given comment along the lines I describe.
Habryka and Said discussed it at length at the time.
I want to reiterate that I think asking for examples is fine (and would say the same thing for questions like “what do you mean by ‘spirituality’?” or whatnot). I agree that a) authors generally should try to provide examples in the first place, b) if they don’t respond to questions about examples, that’s bayesian evidence about whether their idea will ground out into something real. I’m fairly happy with clone of saturn’s variation on Said’s statement, that if the author can’t provide examples, “the post should be regarded as less trustworthy” (as opposed to “author should be interpreted as ignorant”), and gwern’s note that if they can’t, they should forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
The thing I object fairly strongly to is “there is an obligation on the part of the author to respond.”
I definitely don’t think there’s a social obligation, and I don’t think most LessWrongers think that. (I’m not sure if Said meant to imply that). Insofar as he means there’s a bayesian obligation-in-the-laws-of-observation/inference, I weakly agree but think he overstates it: there are a lot of reasons an author might not respond (“belief that a given conversation won’t be productive,” “volume of such comments,” “trying to have a 202 conversation and not being interested in 101 objections,” and simple opportunity cost).
From a practical ‘things that the LessWrong culture should socially encourage people to do’, I liked Vladimir’s point that:
My guess is that people should be rewarded for ignoring criticism they want to ignore, it should be convenient for them to do so. [...] This way authors are less motivated to take steps that discourage criticism (including steps such as not writing things). Criticism should remain convenient, not costly, and directly associated with the criticized thing (instead of getting pushed to be published elsewhere).
i.e. I want there to be good criticism on LW, and think that people feeling free to ignore criticism encourages more good criticism, in part by encouraging more posts and engagement.
It’s been a few years and I don’t know that Said still endorses the obligation phrasing, but much of my objection to Said’s individual commenting-style choices has to do with reinforcing this feeling of obligation. I also think (less confidently) that authors get an impression that Said thinks that, if they haven’t answered a question to his satisfaction (with him standing in for a reasonable median LW user), they should feel a [social] obligation to keep trying until they succeed.
Whether he intends this or not, I think it’s an impression that comes across, and which exerts social pressure, and I think this has a significant negative effect on the site.
I’m a bit confused about how to think about “prescribed norms” vs “good ideas that get selected on organically.” In a previous post Vladimir_Nesov argues that prescribing norms generally doesn’t make sense. Habryka had a similar take yesterday when I spoke with him. I’m not sure I agree (and some of my previous language here has probably assumed a somewhat more prescriptivist/top-down approach to moderating LessWrong that I may end up disendorsing after chatting more with Habryka).
But even in a more organic approach to moderation, I, Habryka, and Ruby think it’s pretty reasonable for moderators to take action to prevent Said from implying that there’s some kind of norm here and exerting pressure around it in other people’s comment sections, when, AFAICT, there is no consensus for such a norm. I predict a majority of LessWrong members would not agree with that norm, either on normative-Bayesian terms or on consequentialist social-norm-design terms. (To be clear, I think many people just haven’t thought about it at all, but I expect them to at least weakly disagree when exposed to the arguments. “What is the actual collectively endorsed position of the LW commentariat” is somewhat cruxy for me here.)
Rate-limit decision reasoning
If this was our first (or second or third) argument with Said over this, I’d think stating this clearly and giving him a warning would be a reasonable next action. Given that we’ve been intermittently arguing about this for 5 years, spending a hundred+ hours of mod time discussing it with him, it feels more reasonable to move to an ultimatum of “somehow, Said needs to stop exerting this pressure in other people’s comment threads, or moderators will take some kind of significant action to either limit the damage or impose a tax on it.”
If we were limited to our existing moderator tools, I would think it reasonable to ban him. But we are in the middle of setting up a variety of rate limiting tools to generally give mods more flexibility, and avoid being heavier-handed than we need to be.
I’m fairly open to a variety of options here. FWIW, I am interested in what Said actually prefers here. (I expect it is not a very fun conversation to be asked by the people-in-power “which way of constraining you from doing the thing you think is right seems least-bad to you?”, but, insofar as Said or others have an opinion on that I am interested)
I am interested in building an automated tool that detects demon threads and rate limits people based on voting patterns. I most likely want to try to build such a tool regardless of what call we make on Said, and if I had a working version of such a tool I might be pretty satisfied with using it instead. My primary cruxes are:
a) I think it’s a lot harder to build and I’m not sure we can succeed, b) I do just think it’s okay for moderators to make judgment calls about individual users based on longterm trends. That’s sort of what mods are for. (I do think for established users it’s important for this process to be fairly costly and subjected to public scrutiny)
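To make the shape of that idea a bit more concrete, here’s a minimal sketch of the kind of voting-pattern heuristic I have in mind (the types, names, and thresholds here are hypothetical illustrations, not anything we’ve actually built or committed to):

```typescript
// Hypothetical sketch only: not real LessWrong/ForumMagnum code or APIs.
interface Comment {
  authorId: string;
  karma: number;
}

// A thread looks "demon-like" if it's a long back-and-forth between exactly
// two users whose most recent comments sit at or below zero karma in total.
function looksLikeDemonThread(thread: Comment[]): boolean {
  if (thread.length < 6) return false;
  const participants = new Set(thread.map((c) => c.authorId));
  const tailKarma = thread.slice(-4).reduce((sum, c) => sum + c.karma, 0);
  return participants.size === 2 && tailKarma <= 0;
}

// Rate-limit a user who has recently participated in several such threads.
function shouldAutoRateLimit(userId: string, recentThreads: Comment[][]): boolean {
  const flagged = recentThreads.filter(
    (thread) =>
      looksLikeDemonThread(thread) && thread.some((c) => c.authorId === userId)
  );
  return flagged.length >= 2; // threshold is arbitrary, purely for illustration
}
```

The appeal of something in this spirit is that it keys off observable voting patterns rather than a moderator’s judgment call about a particular user.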
But for now, after chatting with Oli and Ruby and Robert, I’m implementing the 3-comments-per-post-per-week rule for Said. If we end up having time to build/validate an organic karma-based rate limit that solves the problem I’m worried about here, I might switch to that. Meanwhile some additional features I haven’t shipped yet, which I can’t make promises about, but which I personally think would be good to ship soon include:
There’s at least a boolean flag for individual posts so authors can allow “rate limited people can comment freely”, and probably also a user-setting for this. Another possibility is a user-specific whitelist, but that’s a bit more complicated and I’m not sure if there’s anyone who would want that who wouldn’t want the simpler option.
I’d ideally have this flag set on this post, and probably on other moderation posts written by admins.
Rate-limited users in a given comment section have a small icon that lets you know they’re rate-limited, so you have reasonable expectations of when they can reply.
Updating the /moderation page to list rate limited users, ideally with some kind of reason / moderation-warning.
Updating rate limits to ensure that users can comment as much as they want on their own posts (we made a PR for this change a week ago and haven’t shipped it yet largely because this moderation decision took a lot of time)
Some reasons for this-specific-rate-limit rather than alternatives are:
3 comments within a week is enough for an initial back-and-forth where Said asks questions or makes a critique, the author responds, and Said responds-to-the-response (i.e. allowing the 4 layers of intellectual conversation, and getting the parts of Said’s comments that most people agree are valuable). A rough sketch of the mechanics follows this list.
It caps the conversation out before it can spiral into an unproductive, escalatory thread.
It signals culturally that the problem here isn’t about initial requests for examples or criticisms; it’s about the pattern that tends to play out deeper in threads. I think it’s useful for this to be legible both to authors engaging with Said and to other commenters inferring site norms (i.e. some amount of Socrates is good, too much can cause problems).
If 3 comments isn’t enough to fully resolve a conversation, it’s still possible to follow up eventually.
Said can still write top level posts arguing for norms that he thinks would be better, or arguing about specific posts that he thinks are problematic.
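For concreteness, here’s what the check amounts to (a minimal sketch; the data structures and names are hypothetical, not our actual implementation):

```typescript
// Hypothetical sketch only: not the site's actual data model or API.
interface CommentRecord {
  authorId: string;
  postId: string;
  postedAt: Date;
}

const MAX_COMMENTS_PER_POST_PER_WEEK = 3;
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Allow a new comment only if the user has fewer than 3 comments on this
// post within the trailing week.
function mayComment(
  userId: string,
  postId: string,
  existingComments: CommentRecord[],
  now: Date = new Date()
): boolean {
  const recentCount = existingComments.filter(
    (c) =>
      c.authorId === userId &&
      c.postId === postId &&
      now.getTime() - c.postedAt.getTime() < WEEK_MS
  ).length;
  return recentCount < MAX_COMMENTS_PER_POST_PER_WEEK;
}
```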
That all said, the idea of using rate-limits as a mod-tool is pretty new, I’m not actually sure how it’ll play out. Again, I’m open to alternatives. (And again, see this post for more thoughts on rate limiting)
Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.
This sounds drastic enough that it makes me wonder: since the claimed reason was that Said’s commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see if there is any sort of measurable increase in comment quality, site mood, or good contributors becoming more active moving forward?
Also, is this thing an experiment with a set duration, or a permanent measure? If it’s permanent, it has a very rubber room vibe to it, where you don’t outright ban someone but continually humiliate them if they keep coming by and wish they’ll eventually get the hint.
A background model I want to put out here: two frames that feel relevant to me here are “harm minimization” and “taxing”. I think the behavior Said does has unacceptably large costs in aggregate (and, perhaps to remind/clarify, I think a similar-in-some-ways set of behaviors I’ve seen Duncan do also would have unacceptably large costs in aggregate).
And the three solutions I’d consider here, at some level of abstraction, are:
So-and-so agrees to stop doing the behavior (harder when the behavior is subtle and multifaceted, but, doable in principle)
Moderators restrict the user such that they can’t do the behavior to unacceptable degrees
Moderators tax the behavior such that doing-too-much-of-it is harder overall (but, it’s still something of the user’s choice if they want to do more of it and pay more tax).
All three options seem reasonable to me a priori, it’s mostly a question of “is there a good way to implement them?”. The current rate-limit-proposal for Said is mostly option 2. All else being equal I’d probably prefer option 3, but the options I can think of seem harder to implement and dev-time for this sort of thing is not unlimited.
Quick update for now: @Said Achmiz’s rate limit has expired, and I don’t plan to revisit applying-it-again unless a problem comes up.
I do feel like there’s some important stuff left unresolved here. @Zack_M_Davis’s comment on this other post asks some questions that seem worth answering.
I’d hoped to write up something longer this week but was fairly busy, and it seemed better to explicitly acknowledge it. For the immediate future I think improving on the auto-rate-limits and some other systemic stuff seems more important than arguing or clarifying the particular points here.
A thing that is quite important to me is that users feel comfortable ignoring Said if they don’t think he’s productive to engage with. (See below for more thoughts on this). One reason this is difficult is that it’s hard to establish common knowledge about it among authors. Another reason is that I think Said’s conversational patterns have the effect of making authors and other commenters feel obliged to engage with him (but, this is pretty hard to judge in a clear-cut way)
It seems like the natural solution here would be something that establishes this common knowledge. Something like Twitter’s “community notes” being attached to relevant comments, saying something like “There is no obligation to respond to this comment; please feel comfortable ignoring this user if you don’t feel he will be productive to engage with. Discussion here”
Yeah, I did list that as one of the options I’d consider in the previous announcement.
A problem I anticipate is that it’s some combination of ineffective, and also in some ways a harsher punishment. But if Said actively preferred some version of this solution I wouldn’t be opposed to doing it instead of rate-limiting.
Forgive me for making what may be an obvious suggestion which you’ve dismissed for some good reason, but… is there, actually, some reason why you can’t attach such a note to all comments? (UI-wise, perhaps as a note above the comment form, or something?) There isn’t an obligation, in terms of either the site rules or the community norms as the moderators have defined them, to respond to any comment, is there? (Perhaps with the exception of comments written by moderators…? Or maybe not even those?)
That is, it seems to me that the concern here can be characterized as a question of communicating forum norms to new participants. Can it not be treated as such? (It’s surely not unreasonable to want community members to refrain from actively interfering with the process of communicating rules and norms to newcomers, such as by lying to them about what those rules/norms are, or some such… but the problem, as such, is one which should be approached directly, by means of centralized action, no?)
I think it could be quite nice to give new users information about what site norms are and give a suggested spirit in which to engage with comments.
(Though I’m sure there’s lots of things it’d be quite nice to tell new users about the spirit of the site, but there’s of course bandwidth limitations on how much they’ll read, so just because it’s an improvement doesn’t mean it’s worth doing.)
If it’s worth banning[1] someone (and even urgently investing development resources into a feature that enables that banning-or-whatever!) because their comments might, possibly, on some occasions, potentially mislead users into falsely believing X… then it surely must be worthwhile to simply outright tell users ¬X?
(I mean, of all the things that it might be nice to tell new users, this, which—if this topic, and all the moderators’ comments on it, are to be believed—is so consequential, has to be right up at the top of list?)
Now that you’ve clarified your objection here, I want to note that this does not respond to the central point of the grandparent comment:
If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?
Communicating this to users seems like an obvious win, and one which would make a huge chunk of this entire discussion utterly moot.
If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?
Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style seems unlikely to be the right choice. You are a UI designer and you are well-aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution.
But even assuming we did add such a message, there are many other problems:
Posting such a message would communicate a level of importance for this specific norm that is not commensurate with its actual importance; it does not actually come up very frequently in conversations that don’t involve you and a small number of other users. We have the standard frontpage commenting guidelines, which cover what I consider the actually most important things to communicate, and they are approximately the maximum length I expect new users to read. Adding this warning would have to displace one of the existing guidelines, which seems very unlikely to be worth it.
Banner blindness is real, and if you put the same block of text everywhere, people will quickly learn to ignore it. This has already happened with the existing moderation guidelines and frontpage guidelines.
If you have a sign in a space that says “don’t scream at people” but then lots of people do actually scream at you in that room, this doesn’t actually really help very much, and more likely just reduces trust in your ability to set any kind of norm in your space. I’ve really done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience. The correct response by users to your comments, in the presence of the box with the guideline, would be “There is a very prominent rule that says I am not obligated to respond, so why aren’t you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?”, which then would just bring us back to square one.
My guess is you will respond to this with some statement of the form “but I have said many times that I do not think the norms are such that you have an obligation to respond”, but man, subtext and text do just differ frequently in communication, and the subtext of your comments does really just tend to communicate the opposite. A way out of this situation might be that you just include a disclaimer in the first comment on every post, but I can also imagine that not working for a bunch of messy reasons.
I can also imagine you responding to this with “but I can’t possibly create an obligation to respond, the only people who can do that are the moderators”, which seems to be a stance implied by some other comments you wrote recently. This stance seems to me to fail to model how actual social obligations develop and how people build knowledge about social norms in a space. The moderators only set a small fraction of the norms and culture of the site, and of course individual users can create an obligation for someone to respond.
I am not super interested in going into depth here, but felt somewhat obligated to reply since your suggestion had some number of upvotes.
First, concerning the first half of your comment (re: importance of this information, best way of communicating it):
I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”
Have you checked that users understand that they don’t have an obligation to respond to comments?
If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)
Second, concerning the second half of your comment:
Frankly, this whole perspective you describe just seems bizarre.
Of course I can’t possibly create a formal obligation to respond to comments. Of course only the moderators can do that. I can’t even create social norms that responses are expected, if the moderators don’t support me in this (and especially if they actively oppose me). I’ve never said that such a formal obligation or social norm exists; and if I ever did say that, all it would take is a moderator posting a comment saying “no, actually” to unambiguously controvert the claim.
But on the other hand, I can’t create an epistemic obligation to respond, either—because it already either exists or already doesn’t exist, regardless of what I think or do.
So, you say:
I’ve really done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience.
If someone writes a post and someone else (regardless of who it is!) writes a comment that says “what are some examples?”, then whether the post author “faces humiliation” (hardly the wording I’d choose, but let’s go with it) in front of the Less Wrong audience if they don’t respond is… not something that I can meaningfully affect. That judgment is in the minds of the aforesaid audience. I can’t make people judge thus, nor can I stop them from doing so. To ascribe this effect to me, or to any specific commenter, seems like willful denial of reality.
The correct response by users to your comments, in the presence of the box with the guideline, would be “There is a very prominent rule that says I am not obligated to respond, so why aren’t you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?”, which then would just bring us back to square one.
This would be a highly unreasonable response. And the correct counter-response by moderators, to such a question, would be:
“Because users can’t ‘create a strong obligation for you to respond’. We’ve made it clear that you have no such obligation. (And the commenters certainly aren’t claiming otherwise, as you can see.) It would be utterly absurd for us to moderate or delete these comments, just because you don’t want to respond to them. If you feel that you must respond, respond; if you don’t want to, don’t. You’re an adult and this is your decision to make.”
(You might also add that the downvote button exists for a reason. You might point out, additionally, that low-karma comments are hidden by default. And if the comments in question are actually highly upvoted, well, that suggests something, doesn’t it?)
(I am not planning to engage further at this point.
My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don’t think I am saying particularly complicated things, and I think I’ve communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them.
My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we’ll continue to take some moderator actions until things look better by our models. I think we’ve both gone far beyond our duty of effort to explain where we are coming from and what our models are.)
In the first part of the grandparent comment, I asked a couple of questions. I can’t possibly “figure out what you mean” in those cases, since they were questions about what you’ve done or haven’t done, and about what you think of something I asked.
In the second part of the grandparent comment, I gave arguments for why some things you said seem wrong or incoherent. There, too, “figuring out what you mean” seems like an inapplicable concept.
I think we’ve both gone far beyond our duty of effort to explain where we are coming from and what our models are.
You and the other moderators have certainly written many words. But only the last few comments on this topic have contained even an attempted explanation of what problem you’re trying to solve (this “enforcement of norms” thing), and there, you’ve not only not “gone far beyond your duty” to explain—you’ve explicitly disclaimed any attempt at explanation. You’ve outright said that you won’t explain and won’t try!
(I wrote the following before habryka wrote his message)
While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I’ve been expressing concerns about in this particular discussion.
The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).
But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)
Oh? My mistake, then. Should it be “because their comments have, on some occasions, misled users into falsely believing X”?
(It’s not clear to me, I will say, whether you are claiming this is actually something that ever happened. Are you? I will note that, as you’ll find if you peruse my comment history, I have on more than one occasion taken pains to explicitly clarify that Less Wrong does not, in fact, have a norm that says that responding to comments is mandatory, which is the opposite of misleading people into believing that such a norm exists…)
The philosophical disagreement is related-to but not itself the thing I believe Ray is saying is bad. The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do, and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts. Enforcing norms on behalf of a space that you don’t have buy-in for and that the space would reject tricks people into wasting their time and energy trying to be good citizens of the space in a way that isn’t helping and isn’t being asked of them.
If you did so, I think that behavior ought to be clearly punished in some way. I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm, and also regardless of whether you were fully aware that you were doing so. I think it’s often correct to issue a blanket punishment of a costly behavior even on the occasions that it is done unknowingly, to ensure that there is a consistent incentive against the behavior — similar to how it is typically illegal to commit a crime even if you aren’t aware what you did was a crime.
The problem is implicit enforcement of norms. Your stated beliefs do help alleviate this but only somewhat. And, like Ben also said in that comment, from a moderator perspective it’s often correct to take mod action regardless of whether someone meant to do something we think has had an outsized harm on the site.
I’ve now spent (honestly more than) the amount of time I endorse on this discussion. I am still mulling over the overall discussion a lot, but in the interest of wrapping this up for now, I’m declaring that we’ll leave the rate limit in place for ~3 months, and re-evaluate then. I feel pretty confident doing this because it seems commensurate with the original moderation warning (i.e. a 3 month rate limit seems similar to me in magnitude to a 1-month ban, and I think Said’s comments in the Duncan/Said conflict count as a triggering instance).
I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won’t have the impacts the mod team is worried about. I don’t know that we explained this maximally well, but I think we explained it well enough that it should be fairly obvious to you why your comment here is missing the point, and if it’s not, I don’t really know what to do about that.
No. This is still oversimplifying the issue, which I specifically disclaimed.
Alright, fair enough, so then…
The problem is implicit enforcement of norms.
… but then my next question is:
What the heck is “implicit enforcement of norms”??
I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won’t have the impacts the mod team is worried about. I don’t know that we explained this maximally well
To be quite honest, I think you have barely explained it at all. I’ve been trying to get an explanation out of you, and I have to say: it’s like pulling teeth. It seems like we’re getting somewhere, finally? Maybe?
You’re asking me to change my commenting behavior. I can’t even consider doing that unless I know what you think the problem is.
So, questions:
What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?
This “implicit enforcement of norms” (whatever it is)—is it a problem in addition to making false claims about what norms exist?
If the answer to #2 is “yes”, then what is your response to my earlier comments pointing out that no such false claims took place?
A norm is a pattern of behavior, something people can recognize and enact. Feeding a norm involves making a pattern of behavior more available (easy to learn and perceive), and more desirable (motivating its enactment, punishing its non-enactment). A norm can involve self-enforcement (here “self” refers to the norm, not to a person), adjoining punishment of non-enforcers and reward of enforcers as part of the norm. A well-fed norm is ubiquitous status quo, so available you can’t unsee it. It can still be opted-out of, by not being enacted or enforced, at the cost of punishment from those who enforce it. It can be opposed by conspicuously doing the opposite of what the norm prescribes, breaking the pattern, thus feeding a new norm of conspicuously opposing the original norm.
Almost all anti-epistemology is epistemic damage perpetrated by self-enforcing norms. Tolerance is boundaries against enforcement of norms. Intolerance of tolerance breaks it down, tolerating tolerance allows it to survive, restricting virality of self-enforcing norms. The self-enforcing norm of tolerance that punishes intolerance potentially exterminates valuable norms, not obviously a good idea.
So there is a norm of responding to criticism, its power is the weight of obligation to do that. It always exists in principle, at some level of power, not as categorically absent or present. I think clearly there are many ways of feeding that norm, or not depriving it of influence, that are rather implicit.
(Edit: Some ninja-editing, Said quoted the pre-edit version of third paragraph. Also fixed the error in second paragraph where I originally equivocated between tolerating tolerance and self-enforcing tolerance.)
So there is a norm of responding to criticism. I think clearly there are many ways of feeding that norm, or not depriving it of influence, that are rather implicit.
Perhaps, for some values of “feeding that norm” and “[not] not depriving it of influence”. But is this “enforcement”? I do not think so. As far as I can tell, when there is a governing power (and there is surely one here), enforcement of the power’s rules can be done by that power only. (Power can be delegated—such as by the LW admins granting authors the ability to ban users from their posts—but otherwise, it is unitary. And such delegated power isn’t at all what’s being discussed here, as far as I can tell.)
That’s fair, but I predict that the central moderators’ complaint is in the vicinity of what I described, and has nothing to do with more specific interpretations of enforcement.
If so, then that complaint seems wildly unreasonable. The power of moderators to enforce a norm (or a norm’s opposite) is vastly greater than the power of any ordinary user to subtly influence the culture toward acceptance or rejection of a norm. A single comment from a moderator so comprehensively outweighs the influence, on norm-formation, of even hundreds of comments from any ordinary user, that it seems difficult to believe that moderators would ever need to do anything but post the very occasional short comment that links to a statement of the rules/norms and reaffirms that those rules/norms are still in effect.
(At least, for norms of the sort that we’re discussing. It would be different for, e.g., “users should do X”. You can punish people for breaking rules of the form “users should never do X”; that’s easy enough. Rules/norms of the form “users don’t need to do X”—i.e., those like the one we’ve been discussing—are even easier; you don’t need to punish anything, just occasionally reaffirm or remind people that X is not mandatory. But “users should do X” is tricky, if X isn’t something that you can feasibly mandate; that takes encouragement, incentives, etc. But, of course, that isn’t at all the sort of thing we’re talking about…)
Everyone can feed a norm, and direct action by moderators can be helpless before strong norms, as scorched-earth capabilities can still be insufficient for reaching more subtle targets. Thus discouraging the feeding of particular norms rather than going against the norms themselves.
occasionally reaffirm or remind people that X is not mandatory
If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it’s not mandatory doesn’t obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.
If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it’s not mandatory doesn’t obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.
What you are describing would have to be a very well-entrenched and widespread norm, supported by many users, and opposed by few users. Such a thing is perhaps possible (I have my doubts about this; it seems to me that such a hypothetical scenario would also require, for one thing, a lack of buy-in from the moderators); but even if it is—note how far we have traveled from anything resembling the situation at hand!
Motivation gets internalized, following a norm can be consciously endorsed, disobeying a norm can be emotionally valent. So it’s not just about external influence in affecting the norm, there is also the issue of what to do when the norm is already in someone’s head. To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster. But I think it’s a real thing that happens all the time.
This particular norm is obviously well-known in the wider world, some people have it well-entrenched in themselves. The problem discussed above was reinforcing or spreading the norm, but there is also a problem of triggering the norm. It might be a borderline case of feeding it (in the form of its claim to apply on LW as well), but most of the effect is in influencing people who already buy the norm towards enacting it, by setting up central conditions for its enactment. Which can be unrewarding for them, but necessary on the pain of disobeying the norm entrenched in their mind.
For example, what lsusr is talking about here is trying not to trigger the norm. Statements are less imposing than questions in that they are less valent as triggers for response-obligation norms. This respects boundaries of people’s emotional equilibrium, maintains comfort. When the norms/emotions make unhealthy demands on one’s behavior, this leads to more serious issues. It’s worth correcting, but not without awareness of what might be going on. I guess this comes back to motivating some interpretative labor, but I think there are relevant heuristics at all levels of subtlety.
To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster.
Just so.
In general, what you are talking about seems to me to be very much a case of catering to utility monsters, and denying that people have the responsibility to manage their own feelings. It should, no doubt, be permissible to behave in such ways (i.e., to carefully try to avoid triggering various unhealthy, corrosive, and self-sabotaging habits / beliefs, etc.), but it surely ought not be mandatory. That incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive.
EDIT: Also, and importantly, I think that describing this sort of thing as a “norm” is fundamentally inaccurate. Such habits/beliefs may contribute to creating social norms, but they are not themselves social norms; the distinction matters.
a case of catering to utility monsters [...] incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive
That’s a side of an idealism debate, a valid argument that pushes in this direction, but there are other arguments that push in the opposite direction, it’s not one-sided.
Some people change, given time or appropriate prodding. There are ideological (as in the set of endorsed principles) or emotional flaws, lack of capability at projecting sufficiently thick skin, or of thinking in a way that makes thick skin unnecessary, with defenses against admitting this or being called out on it. It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering, as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly.
I retain my view that to a first approximation, people don’t change.
And even if they do—well, when they’ve changed, then they can participate usefully and non-destructively. Personal flaws are, in a sense, forgiveable, as we are all human, and none of us is perfect; but “forgiveable” does not mean “tolerable, in the context of this community, this endeavor, this task”.
It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering
I think we are very far from “zero” in this regard. Going all the way to “zero” is not even what I am proposing, nor would propose (for example, I am entirely in favor of forbidding personal insults, vulgarity, etc., even if some hypothetical ideal reasoner would be entirely unfazed even by such things).
But that the damage done by catering to “utility monsters” of the sort who find requests for clarification to be severely unpleasant, is profound and far-ranging, seems to me to be too obvious to seriously dispute. It’s hypothetically possible to acknowledge this while claiming that failing to cater thusly has even more severely damaging consequences, but—well, that would be one heck of an uphill battle, to make that case.
(as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly)
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples and there are no standard words like “weak insult” to delineate the issue, it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I agree that unreasonable demands are unreasonable. Pointing them out gains more weight after you signal ability to correctly perceive the distinction between “reasonable”/excusable and clearly unreasonable demands for catering. Though that often leads to giving up or not getting involved. So there is value in idealism in a neglected direction, it keeps the norm of being aware of that direction alive.
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples and there are no standard words like “weak insult” to delineate the issue, it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I must confess that I am very skeptical. It seems to me that any relevant thing that would need to be avoided, is a thing that is actually good, and avoiding which is bad (e.g., asking for examples of claims, concretizations of abstract concepts, clarifications of term usage, etc.). Of course if there were some action which were avoidable as cheaply (both in the “effort to avoid” and “consequences of avoiding” sense) as vulgarity and personal insults are avoidable, then avoiding it might be good. (Or might not; there is at least one obvious way in which it might actually be bad to avoid such things even if it were both possible and cheap to do so! But we may assume that possibility away, for now.)
But is there such a thing…? I find it difficult to imagine what it might be…
I agree that it’s unclear that steps in this direction are actually any good, or if instead they are mildly bad, if we ignore instances of acute conflict. But I think there is room for optimization that won’t have substantive negative consequences in the dimensions worth caring about, but would be effective in avoiding conflict.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is. To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
So it’s things like adopting lsusr’s suggestion to prefer statements to questions. A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs. If it’s still not obvious, it either wouldn’t work with more explicit explanation, or it’s my argument’s problem, and then it’s no loss, this heuristic leaves the asymmetry intact. I might clarify when asked for clarification. Things like that, generated as appropriate by awareness of this objective.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is.
One does not capitulate to utility monsters, especially not if one’s life isn’t dependent on it.
To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
I wholly agree.
So it’s things like adopting lsusr’s suggestion to prefer statements to questions.
As I said in reply to that comment, it’s an interesting suggestion, and I am not entirely averse to applying it in certain cases. But it can hardly be made into a rule, can it? Like, “avoid vulgarity” and “don’t use direct personal attacks” can be made into rules. There generally isn’t any reason to break them, except perhaps in the most extreme, rare cases. But “prefer statements to questions”—how do you make that a rule? Or anything even resembling a rule? At best it can form one element of a set of general, individually fairly weak, suggestions about how to reduce conflict. But no more than that.
A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs.
I follow just this same heuristic!
Unfortunately, it doesn’t exactly work to eliminate or even meaningfully reduce the incidence of utility-monster attack—as this very post we’re commenting under illustrates.
(Indeed I’ve found it to have the opposite effect. Which is a catch-22, of course. Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.)
I’m gesturing at settling into an unsatisfying strategic equilibrium, as long as there isn’t enough engineering effort towards clarifying the issue (negotiating boundaries that are more reasonable-on-reflection than the accidental status quo). I don’t mean capitulation as a target even if the only place “not provoking” happens to lead is capitulation (in reality, or given your model of the situation). My model doesn’t say that this is the case.
Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.
The problem with this framing (as you communicate it, not necessarily in your own mind) is that it could look the same even if there are affordances for de-escalation at every step, and it’s unclear how efficiently they were put to use (it’s always possible to commit a lot of effort towards measures that won’t help; the effort itself doesn’t rule out availability of something effective). Equivalence between “not provoking” and “capitulation” is a possible conclusion from observing absence of these affordances, or alternatively it’s the reason the affordances remain untapped. It’s hard to tell.
Like escalation makes a conflict more acute, de-escalation settles it. Even otherwise uninvolved parties could plot either, there is no implication of absence of de-escalation being escalation. Certainly one party could de-escalate a conflict that the other escalates.
The harder and more relevant question is whether some of these heuristics have the desired effect, and which ones are effective when. I think only awareness of the objective of de-escalation could apply these in a sensible way, specific rules (less detailed than a book-length intuition-distilling treatise) won’t work efficiently (that is, without sacrificing valuable outcomes).
I don’t think I disagree with anything you say in particular, not exactly, but I really am not sure that I have any sense of what the category boundaries of this “de-escalation” are supposed to be, or what the predicate for it would look like. (Clearly the naive connotation isn’t right, which is fine—although maybe it suggests a different choice of term? or not, I don’t really know—but I’m not sure where else to look for the answers.)
Maybe this question: what exactly is “the desired effect”? Is it “avoid conflict”? “Avoid unnecessary conflict”? “Avoid false appearance of conflict”? “Avoid misunderstanding”? Something else?
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.
In some situations acute conflict could be useful, a Schelling point for change (time to publish relevant essays, which might be heard more vividly as part of this event). If it’s not useful, I think de-escalation is the way, with absence of acute conflict as the desired effect.
(De-escalation is not even centrally avoidance of individual instances of conflict. I think it’s more important what the popular perception of one’s intentions/objectives/attitudes is, and to prevent formation of grudges. Mostly not bothering those who probably have grudges. This more robustly targets absence of acute conflict, making some isolated incidents irrelevant.)
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.
Is this really anything like a natural category, though?
Like… obviously, “moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized” are things that happen. But once you say “not necessarily intentionally” in your definitions of “escalation” and “de-escalation”, aren’t you left with “whatever actions happen to increase the chance of there being an acute conflict” (and similarly “decrease” for “de-escalation”)? But what actions have these effects clearly depends heavily on all sorts of situational factors, identities and relationships of the participants, the subject matter of the conversation, etc., etc., such that “what specific actions will, as it will turn out, have contributed to increasing/decreasing the chance of conflict in particular situation X” is… well, I don’t want to say “not knowable”, but certainly knowing such a thing is, so to speak, “interpersonal-interaction-complete”.
What can really be said about how to avoid “acute conflict” that isn’t going to have components like “don’t discuss such-and-such topics; don’t get into such-and-such conversations if people with such-and-such social positions in your environment have such-and-such views; etc.”?
Or is that in fact the sort of thing you had in mind?
I guess my question is: do you envision the concrete recommendations for what you call “de-escalation” or “avoiding acute conflict” to concern mainly “how to say it”, and to be separable from “what to say” and “whom to say it to”? It seems to me that such things mostly aren’t separable. Or am I misunderstanding?
(Certainly “not bothering those who probably have grudges” is basically sensible as a general rule, but I’ve found that it doesn’t go very far, simply because grudges don’t develop randomly and in isolation from everything else; so whatever it was that caused the grudge, is likely to prevent “don’t bother person with grudge” from being very applicable or effective.)
Also, it almost goes without saying, but: I think it is extremely unhelpful and misleading to refer to the sort of thing you describe as “enforcement”. This is not a matter of “more [or less] specific interpretation”; it’s just flatly not the same thing.
What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?
This might be a point of contention, but honestly, I don’t really understand and do not find myself that curious about a model of social norms that would produce the belief that only moderators can enforce norms in any way, and I am bowing out of this discussion (the vast majority of social spaces with norms do not even have any kind of official moderator, so what does this model predict about just like the average dinner party or college class).
My guess is 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space, and can maybe engage with Said on explaining this. I would appreciate someone else jumping in and explaining those models, but I don’t have the time and patience to do this.
Enforcing norms of any kind can be done either by (a) physically preventing people from breaking them—we might call this “hard enforcement”—or (b) inflicting unpleasantness on people who violate said norms, and/or making it clear that this will happen (that unpleasantness will be inflicted on violators), which we might call “soft enforcement”.[1]
Bans are hard enforcement. Downvotes are more like soft enforcement, though karma does matter for things like sorting and whether a comment is expanded by default, so there’s some element of hardness. Posting critical comments is definitely soft enforcement; posting a lot of intensely critical comments is intense soft enforcement. Now, compare with Said’s description elsewhere:
On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.
Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Said is clearly aware of hard enforcement and calls that “enforcement”. Meanwhile, what I call “soft enforcement”, he says isn’t anything at all like “enforcement”. One could put this down to a mere difference in terms, but I think there’s a little more.
It seems accurate to say that Said has an extremely thick skin. Probably to some extent deliberately so. This is admirable, and among other things means that he will cheerfully call out any local emperor for having no clothes; the prospect of any kind of social backlash (“soft enforcement”) seems to not bother him, perhaps not even register to him. Lots of people would do well to be more like him in this respect.
However, it seems that Said may be unaware of the degree to which he’s different from most people in this[2]. (Either in naturally having a thick skin, or in thinking “this is an ideal which everyone should be aspiring to, and therefore e.g. no one would willingly admit to being hurt by critical comments and downvotes”, or something like that.) It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I anticipate a possible objection here: “Well, if I incentivize people to think more rigorously, that seems like a good thing.” At this point the question is “Do Said’s comments enforce any norm at all?”, not “Are Said’s comments pushing people in the right direction?”. (For what it’s worth, my vague memory includes some instances of “Said is asking the right questions” and other instances of “Said is asking dumb questions”. I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.)
Second anticipated objection: “I’m not deliberately trying to enforce anything.” I think it’s possible to do this non-deliberately, even self-destructively. For example, a person could tell their friends “Please tell me if I’m ever messing up in xyz scenarios”, but then, when a friend does so, respond by interrogating the friend about what makes them qualified to judge xyz, have they ever been wrong about xyz, were they under any kind of drugs or emotional distraction or sleep deprivation at the time of observation, do they have any ulterior motives or reasons for self-deception, do their peers generally approve of their judgment, how smart are they really, what were their test scores, have they achieved anything intellectually impressive, etc. (This is avoiding the probably more common failure mode of getting offended at the criticism and expressing anger.) Like, technically, those things are kind of useful for making the report more informative, and some of them might be worth asking in context, but it is easy to imagine the friend finding it unpleasant, either because it took far more time than they expected, or because it became rather invasive and possibly touched on topics they find unpleasant; and the friend concluding “Yeesh. This interaction was not worth it; I won’t bother next time.”
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Anyway, it seems that the claim from various complainants is that Said is, deliberately or not, providing an interface of “If your posts aren’t written in a certain way, then Said is likely to ask a bunch of clarifying questions, with the result that either you may look ~unrigorous or you have to write a bunch of time-consuming replies”, and thus this constitutes soft-enforcing a norm of “writing posts in a certain way”.
Or, regarding the “clarifying questions need replies or else you look ~unrigorous” norm… Actually, technically, I would say that’s not a norm Said enforces; it’s more like a norm he invokes (that is, the norm is preexisting, and Said creates situations in which it applies). As Said says elsewhere, it’s just a fact that, if someone asks a clarifying question and you don’t have an answer, there are various possible explanations for this, one of which is “your idea is wrong”.[4] And I guess the act of asking a question implies (usually) that you believe the other person is likely to answer, so Said’s questions do promulgate this norm even if they don’t enforce it.
Moreover, this being the website that hosts Be Specific, this norm is stronger here than elsewhere. Which… I do like; I don’t want to make excuses for people being unrigorous or weak. But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category. There’s a benefit and a cost to doing so—the benefit being the resulting clarity, the cost being the time and any unpleasantness involved in answering. My brain generates the story “Said, with his extremely thick skin (and perhaps being a weird alien more generally), faces a very difficult task in relating to people who aren’t like him in that respect, and isn’t so unusually good at relating to others very unlike him that he’s able to judge the costs accurately; in practice he underestimates the costs and asks too often.”
[1] And usually anything that does (a) also does (b). Removing someone’s ability to do a thing, especially a thing they were choosing to do in the past, is likely unpleasant on first principles; plus the methods of removing capabilities are usually pretty coarse-grained. In the physical world, imprisonment is the prototypical example here.
[2] It also seems that Duncan is the polar opposite of this (or at least is in that direction), which makes it less surprising that it’d be difficult for them to come to common understanding.
[3] There was a time at work when I was running a script that caused problems for a system. I’d say that this could be called the system’s fault: one piece of the causal chain was a policy of the system’s that I’d never heard of and that seemed to me like the wrong policy, and another piece was the system misidentifying a certain behavior.
In any case, the guy running the system didn’t agree with the goal of my script, and I suspect resented me because of the trouble I’d caused (in that and in some other interactions). I don’t think he had the standing to say I’m forbidden from running it, period; but what he did was tell me to put my script into a pull request, and then do some amount of nitpicking the fuck out of it and requesting additional features; one might call it an isolated demand for rigor, by the standards of other scripts. Anyway, this was a side project for me, and I didn’t care enough about it to push through that, so I dropped it. (Whether this was his intent, I’m not sure, but he certainly didn’t object to the result.)
[4] Incidentally, the more reasonable and respectable the questioner looks, the less plausible explanations like “you think the question is stupid or not worth your time” become, and the greater the pressure to reply becomes for someone who doesn’t want to look wrong. (One wonders if Said should wear a jester’s cap or something, or change his username to “troll”. Or maybe Said can trigger a “Name Examples Bot”, which wears a silly hat, in lieu of asking directly.)
It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I have already commented extensively on this sort of thing. In short, if someone perceives something so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion as receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction, so severe that I cannot see how such a person could possibly expect to participate usefully in any sort of discussion forum, much less one that’s supposed to be about “advancing the art of rationality” or any such thing.
I mean, forget, for the moment, any question of “incentivizing” anyone in any way. I have no idea how it’s even possible to have discussions about anything without anyone ever asking you for clarification or explanation of anything. What does that even look like? I really struggle to imagine how anything can ever get accomplished or communicated while avoiding such things.
And the idea that “requesting more clarification and explanation” constitutes “norm enforcement” in virtue of its unpleasantness (rather than, say, being a way to exemplify praiseworthy behaviors) seems like a thoroughly bizarre view. Indeed, it’s especially bizarre on Less Wrong! Of all the forums on the internet, here, where it was written that “the first virtue is curiosity”, and that “the first and most fundamental question of rationality is ‘what do you think you know, and why do you think you know it?’”…!
I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.
There’s certainly a good deal of intellectual and mental diversity among the Less Wrong membership. (Perhaps not quite enough, I sometimes think, but a respectable amount, compared to most other places.) I count this as a good thing.
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Yes. Having to file a bunch of bureaucratic forms (or else not getting the result you want). Having to answer your friend’s questions (on pain of quarrel or hurtful interpersonal conflict with someone close to you).
But nobody has to reply to comments. You can just downvote and move on with your life. (Heck, you don’t even have to read comments.)
As for the rest, well, happily, you include in your comment the rebuttal to the rest of what I might have wanted to rebut myself. I agree that I am not, in any reasonable sense of the word, “enforcing” anything. (The only part of this latter section of your comment that I take issue with is the stuff about “costs”; but that, I have already commented on, above.)
I’ll single out just one last bit:
But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category.
I think you’ll find that I don’t say “name three examples” every single time someone mentions a category, either (nor—to pre-empt the obvious objection—is there any obvious non-hyperbolic version of this implied claim which is true). In fact I’m not sure I’ve ever said it. As gwern writes:
‘Examples?’ is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
In short, if someone perceives [...] receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction
I must confess that I don’t sympathize much with those who object majorly. I feel comfortable with letting conversations on the public internet fade without explanation. “I would love to reply to everyone [or, in some cases, “I used to reply to everyone”] but that would take up more than all of my time” is something I’ve seen from plenty of people. If I were on the receiving end of the worst version of the questioning behavior from you, I suspect I’d roll my eyes, sigh, say to myself “Said is being obtuse”, and move on.
That said, I know that I am also a weird alien. So here is my attempt to describe the others:
“I do reply to every single comment” is a thing some people do, often in their early engagement on a platform, when their status is uncertain. (I did something close to that on a different forum recently, albeit more calculatedly as an “I want to reward people for engaging with my post so they’ll do more of it”.) There isn’t really a unified Internet Etiquette that everyone knows; the unspoken rules in general, and plausibly on this specifically, vary widely from place to place.
I also do feel some pressure to reply if the commenter is a friend I see in person—that it’s a little awkward if I don’t. This presumably doesn’t apply here.
I think some people have a self-image that they’re “polite”, which they don’t reevaluate especially often, and believe that it means doing certain things such as giving decent replies to everyone; and when someone creates a situation in which being “polite” means doing a lot of work, that may lead to significant unpleasantness (and possibly lead them to resent whoever put them in that situation; a popular example of this is Bilbo feeling he “has to” feed and entertain all the dwarves who come visiting, being very polite and gracious while internally finding the whole thing very worrying and annoying).
If the conversation begins well enough, that may create more of a politeness obligation in some people’s heads. The fact that someone had to create the term “tapping out” is evidence that some people’s priors were that simply dropping the conversation was impolite.
Looking at what’s been said, “frustration” is mentioned. It seems likely that, ex ante, people expect that answering your questions will lead to some reward (you’ll say “Aha, I understand, thank you”; they’ll be pleased with this result), and if instead it leads to several levels of “I don’t understand, please explain further” before they finally give up, then they may be disappointed ex post. Particularly if they’ve never had an interaction like this before, they might not have known what else to do and just kept putting in effort much longer than a more sophisticated version of them would have recommended. Then they come away from the experience thinking, “I posted, and I ended up in a long interaction with Said, and wow, that sucked. Not eager to do that again.”
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
In some places on the internet, trolling is or has been a major problem. Making someone do a bunch of work by repeatedly asking “Why?” and “How do you know that?”, and generally applying an absurdly high standard of rigor, is probably a tactic that some trolls have engaged in to mess with people. (Some of my friends who like to tease have occasionally done that.) If someone seems to be asking a bunch of obtuse questions, I may at least wonder whether it’s deliberate. And interacting with someone you suspect might be trolling you—perhaps someone you ultimately decide is pretty trollish after a long, frustrating interaction—seems potentially uncomfortable.
(I personally tend to welcome the challenge of explaining myself, because I’m proud of my own reasoning skills (and probably being good at it makes the exercise more enjoyable) and aspire to always be able to do that; but others might not. Perhaps some people have memories of being tripped up and embarrassed. Such people should get over it, but given that not all of them have done so… we shouldn’t bend over backwards for them, to be sure, but a bit of effort to accommodate them seems justifiable.)
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
I find this hard to relate to—I’m extremely secure in my own intelligence, and react to the idea of someone being possibly smarter than me with “Ooh, I hope so, I wish that were so! (But I doubt it!)”; if someone comes away thinking I’m stupid, I tend to find that amusing, at worst disappointing (disappointed in them, that is). I suspect your background resembles mine in this respect.
But I hear that teachers and even parents, frequently enough for this to be a problem, feel threatened when a kid says they’re wrong (and backs it up). (To some extent this may be due to authority-keeping issues.) I hear that often kids in school are really afraid of being called, or shown to be, stupid. John Holt (writing from his experience as a teacher—the kids are probably age 10 or so) says:
The other day I decided to talk to the other section about what happens when you don’t understand what is going on. We had been chatting about something or other, and everyone seemed in a relaxed frame of mind, so I said, “You know, there’s something I’m curious about, and I wonder if you’d tell me.” They said, “What?” I said, “What do you think, what goes through your mind, when the teacher asks you a question and you don’t know the answer?”
It was a bombshell. Instantly a paralyzed silence fell on the room. Everyone stared at me with what I have learned to recognize as a tense expression. For a long time there wasn’t a sound. Finally Ben, who is bolder than most, broke the tension, and also answered my question, by saying in a loud voice, “Gulp!”
He spoke for everyone. They all began to clamor, and all said the same thing, that when the teacher asked them a question and they didn’t know the answer they were scared half to death.
I was flabbergasted—to find this in a school which people think of as progressive; which does its best not to put pressure on little children; which does not give marks in the lower grades; which tries to keep children from feeling that they’re in some kind of race. I asked them why they felt gulpish. They said they were afraid of failing, afraid of being kept back, afraid of being called stupid, afraid of feeling themselves stupid.
Stupid. Why is it such a deadly insult to these children, almost the worst thing they can think of to call each other? Where do they learn this? Even in the kindest and gentlest of schools, children are afraid, many of them a great deal of the time, some of them almost all the time. This is a hard fact of life to deal with. What can we do about it?
(By the way, someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high (relative to their peers in their formative years), so this would be a self-censoring fear. I don’t think I’ve seen anyone mention intellectual insecurity in connection to this whole discussion, but I’d say it likely plays at least a minor role, and plausibly plays a major role.)
Again, if school traumatizes people into having irrational fears about this, that’s not a good thing, it’s the schools’ fault, and meanwhile the people should get over it; but again, if a bunch of people nevertheless haven’t gotten over it, it is useful to know this, and it’s justifiable to put some effort into accommodating them. How much effort is up for debate.
Eliezer himself doesn’t say “Name three examples” every single time
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
Gwern would be asking for 3 examples
Gwern is strong. You (and Zack) are also strong. Some people are weaker. One could design a forum that made zero accommodations for the weak. The idea is appealing; I expect I’d enjoy reading it and suspect I could hold my own, commenting there, and maybe write a couple of posts. I think some say that Less Wrong 1.0 was this, and too few people wanted to post there and the site died. One could argue that, even if that’s true, today there are enough people (plus enough constant influx due to interest in AI) to have a critical mass and such a site would be viable. Maybe. One could counterargue that the process of flushing out the weak is noisy and distracting, and might drive away the good people.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan, he says that most people are not “Keepers” (trained ultra-rationalists, always looking unflinchingly at harsh truths, expected to remain calm and clear-headed no matter what they’re dealing with, etc.), that most people are not fit to be Keepers, and that it’s fine and good that they don’t hold themselves to that standard. Now, like, I guess one could imagine there should be at least enough Keepers to have their own forum, and perhaps Less Wrong could be such a forum. Well, one might say that having an active forum that trains people who are not yet Keepers is a strictly easier target than, and likely a prerequisite for, an active and long-lived Keeper forum. If LW is to be the Keeper forum, where are the Keepers trained? The SSC subreddit? Just trial by fire and take the fraction of a fraction of the population who come to the forum untrained and do well without any nurturing?
I don’t know. It could be the right idea. I would give it… 25%?… that this is better than some more civilian-accommodating thing like what we have today. I am really not an expert on forecasting this, and am pretty comfortable leaving it up to the current LW team. (I also note that, if we manage to do something like enhance the overall population’s intelligence by a couple of standard deviations—which I hope will be achievable in my lifetime—then the Keeper pipeline becomes much better.) And no, I don’t think it should do much in the way of accommodating civilians at the expense of the strong—but the optimal amount of doing that is more than zero.
Much of what you write here seems to me to be accurate descriptively, and I don’t have much quarrel with it. The two most salient points in response, I think, are:
To the original question that spawned this subthread (concerning “[implicit] enforcement of norms” by non-moderators, and how such a thing could possibly work), basically everything in your comment here is non-responsive. (Which is fine, of course—it doesn’t imply anything bad about your comment—but I just wanted to call attention to this.)
However accurate your characterizations may be descriptively, the (or, at least, an) important question is whether your prescriptions are good normatively. On that point I think we do have disagreement. (Details follow.)
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
“Basic competence” is usually a category error, I think. (Not always, but usually.) One can have basic competence at one’s profession, or at some task or specialty; and these things could be called into question. And there is certainly a norm, in most social contexts, that a non-specialist questioning the basic competence of a specialist is a faux pas. (I do not generally object to that norm in wider society, though I think there is good reason for such a norm to be weakened, at least, in a place like Less Wrong; but probably not absent entirely, indeed.)
What this means, then, is that if I write something about, let’s say, web development, and someone asks me for clarification of some point, then the implicatures of the question depend on whether the asker is himself a web dev. If so, then I address him as a fellow specialist, and interpret his question accordingly. If not, then I address him as a non-specialist, and likewise interpret his question accordingly. In the former case, the asker has standing to potentially question my basic competence, so if I cannot make myself clear to him, that is plausibly my fault. In the latter case, he has no such standing, but likewise a request for clarification from him can’t really be interpreted as questioning my basic competence in the first place; and any question that, from a specialist, would have that implication, from a non-specialist is merely revelatory of the asker’s own ignorance.
Nevertheless I think that you’re onto something possibly important here. Namely, I have long noticed that there is an idea, a meme, in the “rationalist community”, that indeed there is such a thing as a generalized “basic competence”, which manifests itself as the ability to understand issues of importance in, and effectively perform tasks in, a wide variety of domains, without the benefit of what we would usually see as the necessary experience, training, declarative knowledge, etc., that is required to gain expertise in the domain.
It’s been my observation that people who believe in this sort of “generalized basic competence”, and who view themselves as having it, (a) usually don’t have any such thing, and (b) get quite offended when it’s called into question, even by the most indirect implication, or even conditionally. This fits the pattern you describe, in a way, but of course that is missing a key piece of the puzzle: what is unpleasant is not being asked for clarification, but being revealed to be a fraud (which would be the consequence of demonstrably failing to provide any satisfying clarification).
In some places on the internet, trolling is or has been a major problem.
But it’s quite possible, and not even very hard, to prove oneself a non-troll. (Which I think that I, for instance, have done many times over. There aren’t many trolls who invest as much work into a community as I have. I note this not to say something like “what I’ve contributed outweighs the harm”, as some of the moderators have suggested might be a relevant consideration—and which reasoning, quite frankly, makes me uncomfortable—but rather to say “all else aside, the troll hypothesis can safely be discarded”.)
In other words, yes, trolling exists, but for the purposes of this discussion we can set that fact aside. The LW moderation team have shown themselves to be more than sufficiently adept at dealing with such “cheap” attacks that we can, to a first (or even second or third) approximation, simply discount the possibility of trolling, when talking about actual discussions that happen here.
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
As it happens, I quite empathize with this worry—indeed I think that I can offer a steelman of your description here, which (I hope you’ll forgive me for saying) does seem to me to be just a bit of a strawman (or at least a weakman).
There are indeed some really impressive people on Less Wrong. (Their proportion in the overall membership is of course lower than it was in the “glory days”, but nevertheless they are a non-trivial contingent.) And the worry is not, perhaps, that one will be made to look stupid in front of them, but rather that one will waste their time. “Who am I,” the potential contributor might think, “to offer my paltry thoughts on any of these lofty matters, to be listed alongside the writings of these greats, such that the important and no doubt very busy people who read this website will have to sift through the dross of my embarrassingly half-formed theses and idle ramblings, in the course of their readings here?” And then, when such a person gets up the confidence and courage to post, if the comments they get prove at once (to their minds) that all their worries were right, that what they’ve written is worthless, little more than spam—well, surely they’ll be discouraged, their fears reinforced, their shaky confidence shattered; and they won’t post again. “I have nothing to contribute,” they will think, “that is worthy of this place; I know this for a fact; see how my attempts were received!”
I’ve seen many people express worries like this. And there are, I think, a few things to say about the matter.
First: however relevant this worry may have been once, it’s hardly relevant now.
This is for two reasons, of which the first is that the new Less Wrong is designed precisely to alleviate such worries, with the “personal” / “frontpage” distinction. Well, at least, that would be true, if not for the LW moderators’ quite frustrating policy of pushing posts to the frontpage section almost indiscriminately, all but erasing the distinction, and preventing it from having the salutary effect of alleviating such worries as I have described. (At least there’s Shortform, though?)
The second reason why this sort of worry is less relevant is simply that there’s so much more garbage on Less Wrong today. How plausible is it, really, to look at the current list of frontpage posts, and think “gosh, who am I to compete for readers’ time with these great writings, by these great minds?” Far more likely is the opposite thought: “what’s the point of hurling my thoughts into this churning whirlpool of mediocrity?” Alright, so it’s not quite Reddit, but it’s bad enough that the moderators have had to institute a whole new set of moderation policies to deal with the deluge! (And well done, I say, and long overdue—in this, I wholly support their efforts.)
Second: I recall someone (possibly Oliver Habryka? I am not sure) suggesting that the people who are most worried about not measuring up tend also to be those whose contributions would be some of the most useful. This is a model which is more or less the opposite of your suggestion that “someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high”; it claims, instead, something like “someone being afraid that they won’t measure up is probably Bayesian evidence that their intellectual standards as applied to themselves are high, and that their ideas are valuable”.
I am not sure to what extent I believe either of these two models. But let us take the latter model for granted, for a moment. Under this view, any sort of harsh criticism, or even just anything but the most gentle handling and the most assiduous bending-over-backwards to avoid any suggestion of criticism, risks driving away the most potentially valuable contributors.
Of course, one problem is that any lowering of standards mostly opens the floodgates to a tide of trash, which itself then acts to discourage useful contributions. But let’s imagine that you can solve that problem—that you can set up a most discerning filter, which keeps out all the mediocre nonsense, all the useless crap, but somehow does this without spooking the easily-spooked but high-value authors.
But even taking all of that for granted—you still haven’t solved the fundamental problems.
Problem (a): even the cleverest of thinkers and writers sometimes have good ideas but sometimes have bad ideas; or ideas that have flaws; or ideas that are missing key parts; or, heck, they simply make mistakes, accidentally cite the wrong thing and come to the wrong conclusion, misremember, miscount… you can’t just only ever engage on the assumption that the author’s ideas are without flaw, and that your part is only to respectfully learn at the author’s feet. That doesn’t work.
Problem (b): even supposing that an idea is perfect—what do you do with it? In order to make use of an idea, you must understand it, you must explore it; that means asking questions, asking for clarifications, asking for examples. That is (and this is a point which, incredibly, seems often to be totally lost in discussions like this) how people engage with ideas that excite them! (Otherwise—what? You say “wow, amazing” and that’s it? Or else—as I have personally seen, many times—you basically ignore what’s been written, and respond with some only vaguely related commentary of your own, which doesn’t engage with the post at all, isn’t any attempt to build anything out of it, but is just a sort of standalone bit of cleverness…)
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
No, this is just confused.
Of course I don’t have a bot that replies to every comment with “Be more specific”, but that’s not because there’s some sort of tradeoff; it’s simply that it’s not always appropriate or relevant or necessary. Why ask for clarification, if all is already clear? Why ask for examples, if they’ve already been provided, or none seem needed? Why ask for more specificity, if one’s interlocutor has already expressed themselves as specifically as the circumstances call for? If someone writes a post about “authenticity”, I may ask what they mean by the word; but what mystery, what significance, is there in the fact that I don’t do the same when someone writes a post about “apples”? I know what apples are. When people speak of “apples” it’s generally clear enough what they’re talking about. If not—then I would ask.
Gwern would be asking for 3 examples
Gwern is strong. You (and Zack) are also strong. Some people are weaker.
There is no shame in being weak. (It is an oft-held view, in matters of physical strength, that the strong should protect the weak; I endorse that view, and hold that it applies in matters of emotional and intellectual strength as well.) There may be shame in remaining weak when one can become strong, or in deliberately choosing weakness; but that may be disputed.
But there is definitely shame in using weakness as a weapon against the strong. That is contemptible.
Strength may not be required. But weakness must not be valorized. And while accommodating the weak is often good, it must never come at the expense of discouraging strength, for then the effort undermines itself, and ultimately engineers its own destruction.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan
I deliberately do not, and would not, cite Eliezer’s recent writings, and especially not those about dath ilan. I think that the ideas you refer to, in particular (about the Keepers, and so on), are dreadfully mistaken, to the point of being intellectually and morally corrosive.
I would appreciate someone else jumping in and explaining those models
I’m super not interested in putting effort into talking about this with Said. But a low-effort thing to say is: my review of Order Without Law seems relevant. (And the book itself moreso, but that’s less linkable.)
Could you very briefly say more about what the relevance is, then? Is there some particular aspect of the linked review of which you think I should take note? (Or is it just that you think the whole review is likely to contain some relevant ideas, but you don’t necessarily have any specific parts or aspects in mind?)
Sorry. I spent a few minutes trying to write something and then decided it was going to be more effort than I wanted, so...
I do have something in mind, but I apparently can’t write it down off the cuff. I can gesture vaguely at the title of the book, but I suppose that’s unlikely to be helpful. I don’t have any specific sections in mind.
(I think I’m unlikely to reply again unless it seems exceptionally likely that doing so will be productive.)
so what does this model predict about just like the average dinner party or college class
Dinner parties have hosts, who can do things like: ask a guest to engage or not engage in some behavior; ask a guest to leave if they’re disruptive or unwanted; not invite someone in the first place; in the extreme, call the police (having the legal standing to do so, as the owner of the dwelling where the party takes place).
College classes have instructors, who can do things like: ask a student to engage or not engage in some behavior; ask a student to leave if they’re disruptive; cause a student to be dropped from enrollment in the course; call campus security to eject the student (having the organizational and legal standing to do so, as an employee of the college, who is granted the mandate of running the lecture/course/etc.).
(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)
My guess is 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space …
But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.
On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.
Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Upvotes/downvotes are very slightly like “enforcement”. (But of course we’re not talking about upvotes/downvotes here.)
Banning a user from your posts is a bit more like “enforcement”. (But we’re definitely not talking about that here.)
Given the existence of moderators on Less Wrong, I do not, indeed, see any way to describe anything that I have ever done as “enforcement” of anything. It seems to me that such a claim is incoherent.
But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.
That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.
One last reply:
(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)
Indeed, college classes (and classes in general) seem like an important case to study, since in my experience it is very clear that only a fraction of the norms in those classes get set by the professor/teacher, and that there are clearly many other sources of norms and of the associated enforcement of norms.
Experiencing those bottom-up norms is a shared experience, since almost everyone went through high school and college, so it seems like a good reference point.
Indeed, college classes (and classes in general) seem like an important case to study, since in my experience it is very clear that only a fraction of the norms in those classes get set by the professor/teacher, and that there are clearly many other sources of norms and of the associated enforcement of norms.
Of course this is true; it is not just the instructor, but also the college administration, etc., that function as the setter and enforcer of norms.
But it sure isn’t the students!
(And this is even more true in high school. The students have no power to set any norms, except that which is given them by the instructor/administration/etc.—and even that rarely happens.)
The plot of many high school movies often turns on what is and isn’t socially acceptable to do. For example, Regina in Mean Girls enforced a number of rules on her clique, and attempted, with significant but not complete success, to enforce them on others.
That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.
I do think it would be useful for you to say how much time should elapse without a satisfactory reply by some representative members of this 95% before we can reasonably evaluate whether this prediction has been proven true.
Oh, the central latent variable in my uncertainty here is “is anyone willing to do this?”, not “is anyone capable of this?”. My honest guess is the answer to that is “no”, because this kind of conversation really doesn’t seem fun, and we are 7 levels deep into a 400-comment post.
My guess is if you actively reach out and put effort into trying to get someone to explain it to you, by e.g. putting out a bounty, or making a top-level post, or somehow send a costly signal that you are genuinely interested in understanding, then I do think there is a much higher chance of that, but I don’t currently expect that to happen.
You do understand, I hope, how this stance boils down to “we want you to stop doing a thing, but we won’t explain what that thing is; figure it out yourself”?
No, it boils down to “we will enforce consistent rules and spend like 100+ hours trying to explain them if an established user is confused, and if that’s not enough, then I guess that’s life and we’ll move on”.
Describing the collective effort of the Lightcone team as “unwilling to explain what the thing is” seems really quite inaccurate, given the really quite extraordinary amount of time we have spent over the years trying to get our models across. You can of course complain about the ineffectuality of our efforts to explain, but I do not think you can deny the effort, and I do not currently know what to do that doesn’t involve many additional hours of effort.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
Describing the collective effort of the Lightcone team as “unwilling to explain what the thing is” seems really quite inaccurate
I do not think you can deny the effort
I’ve been told (and only after much effort on my part in trying to get an answer) that the problem being solved here is something called “(implicit) enforcement of norms” on my part. I’ve yet to see any comprehensible (or even, really, seriously attempted) explanation of what that’s supposed to mean, exactly, and how any such thing can be done by a (non-moderator) user of Less Wrong. You’ve said outright that you refuse to attempt an explanation. “Unwilling to explain what the thing is” seems entirely accurate.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
The one we’ve spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and have been talking about for 5+ years in terms of what the cost of your comments to the site has been.
It does not surprise me that you cannot summarize them or restate them in a way that shows you understand them, which is why more effort on explaining them does not seem worth it. The concepts here are also genuinely kind of tricky, and we seem to be coming from very different perspectives and philosophies, and while I do experience frustration, I can also see why this looks very frustrating for you.
I agree that I personally haven’t put a ton of effort (though like 2-3 hours for my comments with Zack which seem related) at this specific point in time, though I have spent many dozens of hours in past years, trying to point to what seems to me the same disagreements.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
The one we’ve spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and have been talking about for 5+ years in terms of what the cost of your comments to the site has been.
But which are not, like… stated anywhere? Like, in some sort of “what are the rules of this website” page, which explains these rules?
Don’t you think that’s an odd state of affairs, to put it mildly?
The concept of “ignorance of the law is no excuse” was mentioned earlier in this discussion, and it’s a reasonable one in the real world, where you generally can be aware of what the law is, if you’re at all interested in behaving lawfully[1]. If you get a speeding ticket, and say “I didn’t know I was exceeding the speed limit, officer”, the response you’ll get is “signs are posted; if you didn’t read them, that’s no excuse”. But that’s because the signs are, in fact, posted. If there were no signs, then it would just be a case of the police pulling over whoever they wanted, and giving them speeding tickets arbitrarily, regardless of their actual speed.
You seem to be suggesting that Less Wrong has rules (not “norms”, but rules!), which are defined only in places like “long, branching, deeply nested comment threads about specific moderation decisions” and “scattered over years of discussion with some specific user(s)”, and which are conceptually “genuinely kind of tricky”; but that violating these rules is punishable, like any rules violation might be.
Does this seem to you like a remotely reasonable way to have rules?
These are in a pinned moderator-top-level comment on a moderation post that was pinned for almost a full week, so I don’t think this counts as being defined in “long, branching, deeply nested comment threads about specific moderation decisions”. I think we tried pretty hard here to extract the relevant decision-boundaries and make users aware of how we plan to make decisions going forward.
I don’t know of a better way to have rules than this. As I said in a thread to Zack, case-law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does sure produce a body of law that in order to navigate it successfully you have to study the lines revealed through past litigation.
In particular, if either Said makes a case that he can obey the spirit of “don’t imply people have an obligation to engage with his comments”; or, someone suggests a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way, I’d feel fairly good about revoking the rate-limit.
The only thing that looks like a rule here is “don’t imply people have an obligation to engage with [your] comments”. Is that the rule you’ve been talking about? (I asked this of Raemon and his answer was basically “yes but not only”, or something like that.)
And the rest pretty clearly suggests that there isn’t a clearly defined rule here.
The mod note from 5 years ago seems to me to be very clearly not defining any rules.
Here’s a question: if you asked ten randomly selected Less Wrong members: “What are the rules of Less Wrong?”—how many of them would give the correct answer? Not as a link to this or that comment, but in their own words (or even just by quoting a list of rules, minus the commentary)?
(What is the correct answer?)
How many of their answers would even match one another?
As I said in a thread to Zack, case-law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does sure produce a body of law that in order to navigate it successfully you have to study the lines revealed through past litigation.
Yes, of course, but the way this works in real-world legal systems is that first there’s a law, and then there’s case law which establishes precedent for its application. (And, as you say, it hardly makes it easy to comply with the law. Perhaps I should retain an attorney to help me figure out what the rules of Less Wrong are? Do I need to have a compliance department…?) Real-world legal systems in well-functioning modern countries generally don’t take the approach of “we don’t have any written down laws; we’ll legislate by judgment calls in each case; even after doing that, we won’t encode those judgments into law; there will only be precedent and judicial opinion, and that will be the whole of the law”.[1]
Do I understand you correctly as saying that the problem, specifically, is… that people reading my comments might, or do, get a mistaken impression that there exists on Less Wrong some sort of social norm which holds that authors have a social obligation to respond to comments on their posts?
That aside, I have questions about this rate limit:
Does it apply to all posts of any kind, written by anyone? More specifically:
Does it apply to both personal and frontpage posts?
Does it apply to posts written by moderators? Posts written about me (or specifically addressing me)? Posts written by moderators about me?
Does it apply to this post? (I assume that it must not, since you mention that you’d like me to make a case that so-and-so, you say “I am interested in what Said actually prefers here”, etc., but just want to confirm this.) EDIT: See below
Does it apply to “open thread” type posts (where the post itself is just a “container”, so to speak, and entirely different conversations may be happening under different top-level comments)?
Does it apply to my own posts? (That would be very strange, of course, but it wouldn’t be the strangest edge case that’s ever been left unhandled in a feature implementation, so seems worth checking…)
Does it apply retroactively to existing posts (including very old posts), or only new posts going forward?
Is there any way for a post author to disable this rate limit, or opt out of it?
Does the rate limit reset at a specific time each week, or is there simply a check for whether 3 comments have been written in the period starting one week before the current time?
Is there any rate limit on editing comments, or only posting new ones? (It is presumably not the intent to have the rate limit triggered by fixing a typo, for instance…)
Is there a way for me to see the status of the rate limit prior to posting, or do I only find out whether the limit’s active when I try to post a comment and get an error?
Is there any UI cue to inform readers or other commenters (including a post’s author) that I can’t reply to a comment of theirs, e.g., due to the rate limit?
ETA: After attempting to post this comment last night, I received a message informing me that I would not be able to do so until some hours in the then-future. This answers the crossed-out question above, I suppose. Unfortunately, it also makes the asides about wanting to know what I think on this topic… well, somewhat farcical, quite frankly.
ETA: After attempting to post this comment last night, I received a message informing me that I would not be able to do so until some hours in the then-future. This answers the crossed-out question above, I suppose. Unfortunately, it also makes the asides about wanting to know what I think on this topic… well, somewhat farcical, quite frankly.
Aww christ, I am very sorry about this. I had planned to ship the “posts can be manually overridden to ignore rate limiting” feature first thing this morning and apply it to this post, but I forgot that you’d still have made some comments less than a week ago which would block you for a while. I agree that was a really terrible experience and I should have noticed it.
The feature is getting deployed now and will probably be live within a half hour.
For now, I’m manually applying the “ignore rate limit” flag to posts that seem relevant. (I’ll likely do a migration backfill on all posts by admins that are tagged “Site Meta”. I haven’t made a call yet about Open Threads)
I think some of your questions are answered in the previous comment:
Meanwhile, some additional features I haven’t shipped yet, which I can’t make promises about but which I personally think would be good to ship soon, include:
[ETA: should be live soon] There’s at least a boolean flag for individual posts so authors can allow “rate limited people can comment freely”, and probably also a user-setting for this. Another possibility is a user-specific whitelist, but that’s a bit more complicated and I’m not sure if there’s anyone who would want that who wouldn’t want the simpler option.
I’d ideally have this flag set on this post, and probably on other moderation posts written by admins.
Rate-limited users in a given comment section have a small icon that lets you know they’re rate-limited, so you have reasonable expectations of when they can reply.
Updating the /moderation page to list rate limited users, ideally with some kind of reason / moderation-warning.
Updating rate limits to ensure that users can comment as much as they want on their own posts (we made a PR for this change a week ago and haven’t shipped it yet largely because this moderation decision took a lot of time)
I’ll write a more thorough response after we’ve finished deploying the “ignoreRateLimits flag for posts” PR.
Do I understand you correctly as saying that the problem, specifically, is… that people reading my comments might, or do, get a mistaken impression that there exists on Less Wrong some sort of social norm which holds that authors have a social obligation to respond to comments on their posts?
Basically yes, although I note I said a lot of other words here that were all fairly important, including the links back to previous comments. For example, it’s important that I think you are factually incorrect about there being “normatively correct general principles” that people who don’t engage with your comments “should be interpreted as ignorant”.
I think my actual crux is “somehow, at the end of the day, people feel comfortable ignoring and/or downvoting your comments if they don’t think they’ll be productive to engage with.”
I believe “Said’s commenting style actively pushes against this in a norm-enforcing-feeling way”, but, as noted in the post, I’m still kind of confused about that (and I’ll say explicitly here: I am still not sure I’ve named the exact problem). I said a whole lot of words about various problems and caveats and how they fit together, and I don’t think you can simplify it down to “the problem is X”. I said at the end that a major crux is whether “Said can adhere to the spirit of ‘don’t imply people have an obligation to engage with your comments’”, where “spirit” is doing some important work of indicating the problem is fuzzy.
We’ve given you a ton of feedback about this over 5-6 years. I’m happy to talk or answer questions for a couple more days if the questions look like they’re aimed at “actually figuring out how to comply with the spirit of the request”, but not at more discussion of “is there a problem here from the moderator’s perspective?”.
I understand (and respect) that you think the moderators are wrong in several deep ways here, and I do honestly think it’s good/better for you to stick around as a generator of thoughts and criticism that’s somewhat uncorrelated with the site admins’ judgment (but not to have free rein to rehash it in subtle conflict in other people’s comment sections).
I’m open (in the long term) to arguments about whether our entire moderation policy is flawed, but that’s outside the scope of this moderation decision, and you should argue about that in top-level posts and/or in posts by Zack/etc if it’s important to you. [Random note that is probably implied but I want to make explicit: “enforcing standards that the LW community hasn’t collectively opted into in other people’s threads” is also essentially the criticism I’d make of many past comments of Duncan’s, although he goes about it in a pretty different way.]
Basically yes, although I note I said a lot of other words here that were all fairly important, including the links back to previous comments. For example, it’s important that I think you are factually incorrect about there being “normatively correct general principles” that people who don’t engage with your comments “should be interpreted as ignorant”.
Well, no doubt most or all of what you wrote was important, but by “important” do you specifically mean “forms part of the description of what you take to be ‘the problem’, which this moderation action is attempting to solve”?
For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!
Or, take the links. One of them is clearly meant to be an example of the thing you described (and which I quoted). The others… don’t seem to be.[2] Are they just examples of things where you disagree with me? Again, fine and well, but is “being (allegedly) wrong about some non-obvious philosophical point” a moderation-worthy offense…? How do these other links fit into a description of what problem you’re solving?
And, perhaps just as importantly… how does any of this fit into… well, anything that has happened recently? All of your links are to discussions that took place three years ago. What is the connection of any of that to recent events? Are you suggesting that I have recently written comments that would give people the impression that Less Wrong has a social norm that imputes on post authors an obligation to respond to comments on their posts?
I ask these things not because I want to persuade you that there isn’t a problem, per se (I think there are many problems but of course my opinion differs from yours about what they are)—but, rather, because I can hardly comply with the rules, either in letter or in spirit or in any other way, when I don’t know what the rules are. From my perspective, what I seem to see the mods doing is the equivalent of the police stopping a person who’s walking down the street, saying “we’re taking you in for speeding”, and, in response to the confused citizen’s protests, explaining that he got a speeding ticket three years ago, and now they’re arresting him for exceeding the speed limit. Is this a long-delayed punishment? Is there a more recent offense? Is there some other reason for the arrest? Or what?
I think my actual crux is “somehow, at the end of the day, people feel comfortable ignoring and/or downvoting your comments if they don’t think they’ll be productive to engage with.”
I think that people should feel comfortable ignoring and/or downvoting anyone’s comments if they don’t think engagement will be productive! Certainly I should not be any sort of exception to this. (Why in the world would I be? Of course you should engage only if you have some expectation that engaging will be productive, and not otherwise!)
If I write a comment and you think it is a bad comment (useless, obviously wrong, etc.), by all means downvote and ignore. Why not? And if I write another comment that says “you have an obligation to reply!”—I wouldn’t say that, because I don’t think that, but let’s say that I did—downvote and ignore that comment, too! Do this no matter who the commenter is!
Anyhow, if the problem really is essentially as I’ve summarized it, plus or minus some nuances and elaborations, then:
I really don’t see what any recent events have to do with anything, or how the rate limit solves it, or… really, this entire situation perplexes me, from that perspective. But,
If the worry is that other Less Wrong participants might get the wrong idea about site norms from my comments, then let me assure you that my comments certainly shouldn’t be taken to imply that said norms are anything other than what the moderators say they are. If anyone gets any other impression from my comments, that can only be a misunderstanding. I solemnly promise that if anyone questions me on this point (i.e., asks whether I am claiming the existence of some norms which the moderators have disclaimed), I will, in response, clearly reaffirm this view. (I encourage anyone, moderators or otherwise, to link to this comment in answer to any commenters or authors who seem at all confused on this point.)
Actually, you somewhat misconstrue the comment, by taking it out of context. That’s perhaps not too important, but worth noting. In any case, it’s a comment I wrote three years ago, in the middle of a long discussion, and as part of a longer and offhandedly-written description, spread over a number of comments, of my view—and which, moreover, takes its phrasing directly from the comment it was a reply to. These are hardly ideal conditions for expressing nuances of meaning. My view is that, when writing comments like this in the middle of a long discussion, it is neither necessary nor desirable to agonize over whether the phrasing and formulation is ideal, because anyone who disagrees or misunderstands can just reply to indicate that, and the confusion or disagreement can be hammered out in the replies. (And this is largely what happened in the given case.[3])
In particular, I can’t help but note that you link to a sub-thread which begins with me saying “This comment is a tangent, and I haven’t decided yet if it’s relevant to my main points or just incidental—”, i.e., where I pretty clearly signal that engagement isn’t necessarily critical, as far as the main discussion goes.
Perhaps you missed it, but I did write a comment in that discussion where I very explicitly wrote that “I’m not saying that there’s a specific obligation for a post author to post a reply comment, using the Less Wrong forum software, directly to any given comment along the lines I describe”. Was that comment, despite my efforts, somehow unclear? That’s possible! These things happen. But is that a moderation-worthy offense…?
For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!
The philosophical disagreement is related-to but not itself the thing I believe Ray is saying is bad. The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do, and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts. Enforcing norms on behalf of a space that you don’t have buy-in for and that the space would reject tricks people into wasting their time and energy trying to be good citizens of the space in a way that isn’t helping and isn’t being asked of them.
If you did so, I think that behavior ought to be clearly punished in some way. I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm, and also regardless of whether you were fully aware that you were doing so. I think it’s often correct to issue a blanket punishment of a costly behavior even on the occasions that it is done unknowingly, to ensure that there is a consistent incentive against the behavior — similar to how it is typically illegal to commit a crime even if you aren’t aware what you did was a crime.
The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do
Is that really the claim? I must object to it, if that’s so. I don’t think I’ve ever made any false claims about what social norms obtain on Less Wrong (and to the extent that some of my comments were interpreted that way, I was quick to clearly correct that misinterpretation).
Certainly the “normatively correct general principles” comment didn’t contain any such false claims. (And Raemon does not seem to be claiming otherwise.) So, the question remains: what exactly is the relevance of the philosophical disagreement? How is it connected to any purported violations of site rules or norms or anything?
… and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts
I am not sure what this means. I am not a moderator, so it’s not clear to me how I can enforce any norm. (I can exemplify conformance to a norm, of course, but that, in this case, would be me replying to comments on my posts, which is not what we’re talking about here. And I can encourage or even demand conformance to some falsely-claimed norm. But for me to enforce anything seems impossible as a purely technical matter.)
If you did so, I think that behavior ought to be clearly punished in some way.
Indeed, if I had done this, then some censure would be warranted. (Now, personally, I would expect that such censure would start with a comment from a moderator, saying something like: “<name of my interlocutor>, to be clear, Said is wrong about what the site’s rules and norms are; there is no obligation to respond to commenters. Said, please refrain from misleading other users about this.” Then subsequent occurrences of comments which were similarly misleading might receive some more substantive punishment, etc. That’s just my own, though I think fairly reasonable, view of how this sort of moderation challenge should be approached.)
But I think that, taking the totality of my comments in the linked thread, it is difficult to support the claim that I somehow made false claims about site rules or norms. It seems to me that I was fairly clearly talking about general principles—about epistemology, not community organization.
Now, perhaps you think that I did not, in fact, make my meaning clear enough? Well, as I’ve said, these things do happen. Certainly it seems to me like step one to rectify the problem, such as it is, would be just to make a clear ex cathedra statement about what the rules and norms actually are. That mitigates any supposed damage. (Was this done? I don’t recall that it was. But perhaps I missed it.) Then there can be talk of punishment.[1]
But, of course, there already was a moderation warning issued for the incident in question. Which brings us back to the question of what it has to do with the current situation (and to my “arrest for a speeding ticket issued three years ago” analogy).
P.S.:
I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm
To be maximally clear: I neither believed nor (as far as I can recall) claimed this.
Although it seems to me that to speak in terms of “punishment”, when the offense (even taking as given that the offense took place at all) is something so essentially innocent as accidentally mis-characterizing an informal community norm, is, quite frankly, bizarrely harsh. I don’t think that I’ve ever participated in any other forum with such a stringent approach to moderation.
For a quick answer connecting the dots between “What does the recent Duncan/Said conflict have to do with Said’s past behavior,” I think your behavior in the various you/Duncan threads was bad in basically the same way we gave you a mod warning about 5 years ago, and also similar to a preliminary warning we gave you 6 years ago (in intercom, which ended in us deciding to take no action at the time).
(i.e. some flavor of aggressiveness/insultingness, along with demanding more work from others than you were bringing yourself).
As I said, I cut you some slack for it because of some patterns Duncan brought to the table, but not that much slack.
The previous mod warning said “we’d ban you for a month if you did it again.” I don’t really feel great about that, since over the past 5 years there have been various comments that flirted with the same behavior, and the cost of evaluating it each time is pretty high.
If the worry is that other Less Wrong participants might get the wrong idea about site norms from my comments, then let me assure you that my comments certainly shouldn’t be taken to imply that said norms are anything other than what the moderators say they are. If anyone gets any other impression from my comments, that can only be a misunderstanding. I solemnly promise that if anyone questions me on this point (i.e., asks whether I am claiming the existence of some norms which the moderators have disclaimed), I will, in response, clearly reaffirm this view. (I encourage anyone, moderators or otherwise, to link to this comment in answer to any commenters or authors who seem at all confused on this point.)
I will think on whether this changes anything for me. I do think it’s helpful, offhand I don’t feel that it completely (or obviously more than 50%) solves the problem, but, I do appreciate it and will think on it.
… bad in basically the same way we gave you a mod warning about 5 years ago …
I wonder if you find this comment by Benquo (i.e., the author of the post in question; note that this comment was written just months after that post) relevant, in any way, to your views on the matter?
Yeah, I do find that comment/concept important. I think I was basically already counting that class of thing in the list of positive things I’d mentioned elsethread, but yes, I am grateful to you for that. (Benquo being one to say it in that context is a bit more evidence of its weight which I had missed before, but I do think I was already weighting the concept approximately the right amount for the right reasons. Partly from having already generally updated on some parts of the Benquo worldview.)
Please note, my point in linking that comment wasn’t to suggest that the things Benquo wrote are necessarily true and that the purported truth of those assertions, in itself, bears on the current situation. (Certainly I do agree with what he wrote—but then, I would, wouldn’t I?)
Rather, I was making a meta-level point. Namely: your thesis is that there is some behavior on my part which is bad, and that what makes it bad is that it makes post authors feel… bad in some way (“attacked”? “annoyed”? “discouraged”? I couldn’t say what the right adjective is, here), and that as a consequence, they stop posting on Less Wrong. And as the primary example of this purported bad behavior, you linked the discussion in the comments of the “Zetetic Explanation” post by Benquo (which resulted in the mod warning you noted).
But the comment which I linked has Benquo writing, mere months afterward, that the sort of critique/objection/commentary which I write (including the sort which I wrote in response to his aforesaid post) is “helpful and important”, “very important to the success of an epistemic community”, etc. (Which, I must note, is tremendously to Benquo’s credit. I have the greatest respect for anyone who can view, and treat, their sometime critics in such a fair-minded way.)
This seems like very much the opposite of leaving Less Wrong as a result of my commenting style.
It seems to me that when the prime example you provide of my participation in discussions on Less Wrong purportedly being the sort of thing that drives authors away actually turns out to be an example of exactly the opposite—of an author (whose post I criticized, in somewhat harsh terms) fairly soon (months) thereafter saying that my critical comments are good and important to the community and that I should continue…
… well, then regardless of whether you agree with the author in question about whether or not my comments are good/important/whatever, the fact that he holds this view casts very serious doubt on your thesis. Wouldn’t you agree?
(And this, note, is an author who has written many posts, many of them quite highly upvoted, and whose writings I have often seen cited in all sorts of significant discussions, i.e., one who has contributed substantially to Less Wrong.)
The reason it’s not additional evidence to me is that I, too, find value in the comments you write for the reasons Benquo states, despite also finding them annoying at the time. So, Benquo’s response here seems like an additional instance of my viewpoint, rather than a counterexample. (Though I’m not claiming Benquo agrees with me on everything in this domain.)
… well, then regardless of whether you agree with the author in question about whether or not my comments are good/important/whatever, the fact that he holds this view casts very serious doubt on your thesis. Wouldn’t you agree?
Said is asking Ray, not me, but I strongly disagree.
Point 1 is that a black raven is not strong evidence against white ravens. (Said knows this, I think.)
Point 2 is that a behavior which displeases many authors can still be pleasant or valuable to some authors. (Said knows this, I think.)
Point 3 is that benquo’s view on even that specific comment is not the only author-view that matters; benquo eventually being like “this critical feedback was great” does not mean that other authors watching the interaction at the time did not feel “ugh, I sure don’t want to write a post and have to deal with comments like this one.” (Said knows this, I think.)
(Notably, benquo once publicly stated that he suspected a rough interaction would likely have gone much better under Duncan moderation norms specifically; if we’re updating on benquo’s endorsements then it comes out to “both sets of norms useful,” presumably for different things.)
I’d say it casts mild doubt on the thesis, at best, and that the most likely resolution is that Ray ends up feeling something like “yeah, fair, this did not turn out to be the best example,” not “oh snap, you’re right, turns out it was all a house of cards.”
(This will be my only comment in this chain, so as to avoid repeating past cycles.)
Point 1 is that a black raven is not strong evidence against white ravens. (Said knows this, I think.)
A black raven is, indeed, not strong evidence against white ravens. But that’s not quite the right analogy. The more accurate analogy would go somewhat like this:
Alice: White ravens exist!
Bob: Yeah? For real? Where, can I see?
Alice (looking around and then pointing): Right… there! That one!
Bob (peering at the bird in question): But… that raven is actually black? Like, it’s definitely black and not white at all.
Now not only is Bob (once again, as he was at the start) in the position of having exactly zero examples of white ravens (Alice’s one purported example having been revealed to be not an example at all), but—and perhaps even more importantly!—Bob has reason to doubt not only Alice’s possession of any examples of her claim (of white ravens existing), but her very ability to correctly perceive what color any given raven is.
Now if Alice says “Well, I’ve seen a lot of white ravens, though”, Bob might quite reasonably reply: “Have you, though? Really? Because you just said that that raven was white, and it is definitely, totally black.” What’s more, not only Bob but also Alice herself ought rightly to significantly downgrade her confidence in her belief in white ravens (by a degree commensurate with how big a role her own supposed observations of white ravens have played in forming that belief).
Point 2 is that a behavior which displeases many authors can still be pleasant or valuable to some authors. (Said knows this, I think.)
Just so. But, once again, we must make our analysis more specific and more precise in order for it to be useful. There are two points to make in response to this.
First is what I said above: the point is not just that the commenting style/approach in question is valuable to some authors (although even that, by itself, is surely important!), but that it turns out to be valuable specifically to the author who served as an—indeed, as the—example of said commenting style/approach being bad. This calls into question not just the thesis that said approach is bad in general, but also the weight of any purported evidence of the approach’s badness, which comes from the same source as the now-controverted claim that it was bad for that specific author.
Second is that not all authors are equal.
Suppose, for example, that dozens of well-respected and highly valued authors all turned out to condemn my commenting style and my contributions, while those who showed up to defend me were all cranks, trolls, and troublemakers. It would still be true, then, to say that “my comments are valuable to some authors but displease others”, but of course the views of the “some” would be, in any reasonable weighting, vastly and overwhelmingly outweighed by the views of the “others”.
But that, of course, is clearly not what’s happening. And the fact that Benquo is certainly not some crank or troll or troublemaker, but a justly respected and valued contributor, is therefore quite relevant.
Point 3 is that benquo’s view on even that specific comment is not the only author-view that matters; benquo eventually being like “this critical feedback was great” does not mean that other authors watching the interaction at the time did not feel “ugh, I sure don’t want to write a post and have to deal with comments like this one.” (Said knows this, I think.)
First, for clarity, let me note that we are not talking (and Benquo was not talking) about a single specific comment, but many comments—indeed, an entire approach to commenting and forum participation. But that is a detail.
It’s true that Benquo’s own views on the matter aren’t the only relevant ones. But they surely are the most relevant. (Indeed, it’s hard to see how one could claim otherwise.)
And as far as “audience reactions” (so to speak) go, it seems to me that what’s good for the goose is good for the gander. Indeed, some authors (or potential authors) reading the interaction might have had the reaction you describe. But others could have had the opposite reaction. (And, judging by the comments in that discussion thread—as well as many other comments over the years—others in fact did have the opposite reaction, when reading that discussion and numerous others in which I’ve taken part.) What’s more, it is even possible (and, I think, not at all implausible) that some authors read Benquo’s months-later comment and thought “you know, he’s right”.
(Notably, benquo once publicly stated that he suspected a rough interaction would likely have gone much better under Duncan moderation norms specifically; if we’re updating on benquo’s endorsements then it comes out to “both sets of norms useful,” presumably for different things.)
Well, as I said in the grandparent comment, updating on Benquo’s endorsement is exactly what I was not suggesting that we do. (Not that I am suggesting the opposite—not updating on his endorsement—either. I am only saying that this was not my intended meaning.)
Still, I don’t think that what you say about “both sets of norms useful” is implausible. (I do not, after all, take exception to all of your preferred norms—quite the contrary! Most of them are good. And an argument can be made that even the ones to which I object have their place. Such an argument would have to actually be made, and convincingly, for me to believe it—but that it could be made, seems to me not to be entirely out of the question.)
I’d say it casts mild doubt on the thesis, at best, and that the most likely resolution is that Ray ends up feeling something like “yeah, fair, this did not turn out to be the best example,” not “oh snap, you’re right, turns out it was all a house of cards.”
Well, as I’ve written, to the extent that the convincingness of an argument for some claim rests on examples (especially if it’s just one example), the purported example(s) turning out to be no such thing does, indeed, undermine the whole argument. (Especially—as I note above—insofar as that outcome also casts doubt on whatever process resulted in us believing that raven to have been white in the first place.)
By default, the rate limit applies to all posts. There are two exceptions:
1. I just shipped the “ignore rate limits” flag on posts, which authors or admins can set so that a given post allows rate-limited users to comment without restriction.
2. I haven’t shipped it yet, but expect to ship “rate-limited authors can comment on their own posts without restriction” within the next day. (For the immediate future this just applies to authors; I expect to later ship something that makes it work for coauthors.)
In general, we are starting by rolling out the simplest versions of the rate-limiting feature (which is being used on many users, not just you), and solving problems as we notice them. I acknowledge this makes for some bad experiences along the way. I think I stand by that decision because I’m not even sure rate limits will turn out to work as a moderator tool, and investing like 3 months of upfront work ironing out the bugs first doesn’t seem like the right call.
For the general question of “whether a given such-and-such post will be rate limited”, the answer will route through “will individual authors choose to set “ignoreRateLimit”, and/or will site admins choose to do it?”.
Ruby and I have some disagreements on how important it is to set the flag on moderation posts. I personally think it makes sense to be extra cautious about limiting people’s ability to speak in discussions that will impact their future ability to speak, since those can snowball and I think people are rightly wary of that. There are some other tradeoffs important to @Ruby, which I guess he can elaborate on if he wants.
Re: Open threads – I haven’t made a call yet, but I’m leaving the flag disabled/rate-limited-normally for now.
There is no limit on rate-limited people editing their own comments. We might revisit it if it’s a problem, but my current guess is that rate-limitees editing their comments is pretty fine.
The check happens based on the timestamp of your last comment (it works via fetching comments within the time window and seeing if there are more than the allotted amount)
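To make the mechanics concrete, here is a minimal sketch of the kind of check described above: a trailing one-week window over recent comments rather than a fixed weekly reset, plus a per-post flag that bypasses the limit. This is illustrative only; the names (canCommentNow, ignoreRateLimits, etc.) are hypothetical and not the actual LessWrong implementation.

```typescript
// Hypothetical sketch of a sliding-window rate-limit check.
// All names here are illustrative, not the real LessWrong/ForumMagnum API.

interface Comment {
  userId: string;
  postId: string;
  postedAt: Date;
}

interface Post {
  id: string;
  ignoreRateLimits: boolean; // per-post flag an author or admin can set
}

const WINDOW_MS = 7 * 24 * 60 * 60 * 1000; // trailing one-week window
const MAX_COMMENTS_PER_POST_PER_WEEK = 3;

function canCommentNow(
  userId: string,
  post: Post,
  commentsOnPost: Comment[], // comments already fetched for this post
  now: Date = new Date()
): boolean {
  // Posts flagged to ignore rate limits accept comments without restriction.
  if (post.ignoreRateLimits) return true;

  // Count this user's comments on this post within the trailing week.
  const windowStart = now.getTime() - WINDOW_MS;
  const recentCount = commentsOnPost.filter(
    (c) =>
      c.userId === userId &&
      c.postId === post.id &&
      c.postedAt.getTime() >= windowStart
  ).length;

  return recentCount < MAX_COMMENTS_PER_POST_PER_WEEK;
}
```

On this reading, the limit “resets” comment by comment as each one ages out of the trailing week, rather than at a fixed time each week.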
On LessWrong.com (but presumably not greaterwrong, atm) it should inform you that you’re not able to comment before you get started.
On LessWrong.com, it will probably (later, but not yet; not sure whether we’ll get to it this week) show an indicator that a commenter has been rate limited. (It’s fairly easy to do this when you open a comment-box to reply to them; there are some performance concerns for checking-to-display it on every comment.)
I plan to add a list of rate-limited users to lesswrong.com/moderation. I think there’s a decent chance that goes live within a day or so.
Ruby and I have some disagreements on how important it is to set the flag on moderation posts.
A lot of this is that the set of “all moderation posts” covers a wide range of topics and the potential set of “all rate limited users” might include a wide diversity of users, making me reluctant to commit upfront to rate limits not applying, across the board, on moderation posts.
The concern about excluding people from conversations that affect whether they get to speak is a valid consideration, but I think there are others too. Chiefly, people are likely rate limited primarily because they get in the way of productive conversation, and in so far as I care about moderation conversations going well, I might want to continue to exclude rate limited users there.
Note that there are ways, albeit with friction, for people to get to weigh in on moderation questions freely. If it seemed necessary, I’d be down with creating special un-rate-limited side-posts for moderation posts.
I am realizing that what seems reasonable here will depend on your conception of rate limits. A couple of conceptions you might have:
1. You’re currently not producing stuff that meets the bar for LessWrong, but you’re writing a lot, so we’ll rate limit you as a warning with teeth to up your quality.
2. We would have / are close to banning you; however, we think rate limits might serve either as
- a sufficient disincentive against the actions we dislike, or
- a restriction that simply stops you getting into unproductive things, e.g. Demon Threads.
Regarding 2., a banned user wouldn’t get to participate in moderation discussions either, so under that frame, it’s not clear rate limited users should get to. I guess it really depends if it was more of a warning / light rate ban or something more severe, close to an actual ban.
I can say more here, not exactly a complete thought. Will do so if people are interested.
I just shipped the “ignore rate limit” flag for posts, and removed the rate limit for this post. All users can set the flag on individual posts.
Currently they have to set it for each individual post. I think it’s moderately likely we’ll make it such that users can set it as a default setting, although I haven’t talked it through with other team members yet so can’t make an entirely confident statement on it. We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent)
I’m working on a longer response to the other questions.
We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent)
I could be misunderstanding all sorts of things about this feature that you’ve just implemented, but…
Why would you want to limit newer users from being able to declare that rate-limited users should be able to post as much as they like on newer users’ posts? Shouldn’t I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?
Shouldn’t I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?
100+ karma means something like you’ve been vetted for some degree of investment in the site and enculturation, reducing the likelihood you’ll do something with poor judgment and ill intention. I might worry about new users creating posts that ignore rate limits, then attracting all the rate-limited new users who were not having good effects on the site to come comment there (haven’t thought about it hard, but it’s the kind of thing we consider).
The important thing is that the way the site currently works, any behavior on the site is likely to affect other parts of the site, such that to ensure the site is a well-kept garden, the site admins do have to consider which users should get which privileges.
(There are similar restrictions on which users can ban users from which posts.)
I expect Ray will respond more. My guess is you not being able to comment on this specific post is unintentional and it does indeed seem good to have a place where you can write more of a response to the moderation stuff.
The other details will likely be figured out as the feature gets used. My guess is that how things behave is kind of random until we spend more time figuring out the details. My sense was that the feature was kind of thrown together and is now being iterated on more.
The discussion under this post is an excellent example of the way that a 3-per-week per-post comment limit makes any kind of useful discussion effectively impossible.
I continue to be disgusted with this arbitrary moderator harassment of a long-time, well-regarded user, apparently on the pretext that some people don’t like his writing style.
Achmiz is not a spammer or a troll, and has made many highly-upvoted contributions. If someone doesn’t like Achmiz’s comments, they’re free to downvote (just as I am free to upvote). If someone doesn’t want to receive comments from Achmiz, they’re free to use already-existing site functionality to block him from commenting on their own posts. If someone doesn’t like his three-year-old views about an author’s responsibility or lack thereof to reply to criticisms, they’re free to downvote or offer counterarguments. Why isn’t that the end of the matter?
My first comment on Overcoming Bias was on 15 December 2007. I was at the first Overcoming Bias meetup on 21 February 2008. Back then, there was no concept of being a “good citizen” of Overcoming Bias. It was a blog. People read the blog, and left comments when they had something to say, speaking in their own voice, accountable to no authority but their own perception of reality, with no obligation to be corrigible to the spirit of someone else’s models. Achmiz’s first comment on Less Wrong was in May 2010.
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
Perhaps it will be replied that no one is being silenced—this is just a mere rate-limit, not any kind of persecution or restriction on speech. I don’t think Oliver Habryka is naïve enough to believe that. Citizenship—first-class citizenship—is a Schelling point. When someone tries to take that away from you, it would be foolish to believe that they don’t intend you any further harm.
I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here.
I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here.
Sure, but… I think I don’t know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more.
Some quick thoughts:
LessWrong totally has prerequisites. I don’t think you necessarily need to be an atheist to participate in LessWrong, but if you straightforwardly believe in the Christian god, and haven’t really engaged with the relevant arguments on the site, and you comment on posts that assume that there is no god, I will likely just ban you or ask you to stop. There are many other dimensions for which this is also true. Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
Well-Kept Gardens Die by Pacifism is IMO one of the central moderation principles of LessWrong. I have huge warning flags around your language here and feel like it’s doing something pretty similar to the outraged calls for “censorship” that Eliezer refers to in that post, but I might just be misunderstanding you. In general, LessWrong has always been, and will continue to be, driven by inside-view models of the moderators about what makes a good discussion forum, and this seems quite important.
I don’t know, I guess your whole comment feels really quite centrally like the kind of thing that Eliezer explicitly warns against in Well-Kept Gardens Die by Pacifism, so let me just reply to quotes from you with quotes from Eliezer:
Since when is it the job of a website administrator to micromanage how intellectuals think and write, and what concepts they need to accept? (As contrasted with removing low-quality, spam, or off-topic comments; breaking up flame wars, &c.)
Eliezer:
But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.
After all—anyone acculturated by academia knows that censorship is a very grave sin… in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors’ grading, and heaven forbid the janitors should speak up in the middle of a colloquium.
[...]
And after all—who will be the censor? Who can possibly be trusted with such power?
Quite a lot of people, probably, in any well-kept garden. But if the garden is even a little divided within itself—if there are factions—if there are people who hang out in the community despite not much trusting the moderator or whoever could potentially wield the banhammer—
(for such internal politics often seem like a matter of far greater import than mere invading barbarians)
—then trying to defend the community is typically depicted as a coup attempt. Who is this one who dares appoint themselves as judge and executioner? Do they think their ownership of the server means they own the people? Own our community? Do they think that control over the source code makes them a god?
You:
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
Eliezer:
Maybe it’s because I grew up on the Internet in places where there was always a sysop, and so I take for granted that whoever runs the server has certain responsibilities. Maybe I understand on a gut level that the opposite of censorship is not academia but 4chan (which probably still has mechanisms to prevent spam). Maybe because I grew up in that wide open space where the freedom that mattered was the freedom to choose a well-kept garden that you liked and that liked you, as if you actually could find a country with good laws. Maybe because I take it for granted that if you don’t like the archwizard, the thing to do is walk away (this did happen to me once, and I did indeed just walk away).
And maybe because I, myself, have often been the one running the server. But I am consistent, usually being first in line to support moderators—even when they’re on the other side from me of the internal politics. I know what happens when an online community starts questioning its moderators. Any political enemy I have on a mailing list who’s popular enough to be dangerous is probably not someone who would abuse that particular power of censorship, and when they put on their moderator’s hat, I vocally support them—they need urging on, not restraining. People who’ve grown up in academia simply don’t realize how strong are the walls of exclusion that keep the trolls out of their lovely garden of “free speech”.
Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving. But this is more accused than realized, so far as I can see.
In any case the light didn’t go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently. While reading a comment at Less Wrong, in fact, though I don’t recall which one.
But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay. Being too humble, doubting themselves an order of magnitude more than I would have doubted them. It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence.
Again, this is all just on a very rough reading of your comment, and I might be misunderstanding you.
My current read here is that your objection is really a very standard “how dare the moderators moderate LessWrong” objection, when like, I do really think we have the mandate to moderate LessWrong how we see fit, and indeed maybe the primary reason why LessWrong is not as dead as basically every other forum of its age and popularity is because it had the seed of “Well-Kept Gardens Die by Pacifism” in it. The understanding that yes, of course the moderators will follow their inside view and make guesses at what is best for the site without trying to be maximally justifiable, and without getting caught in spirals of self-doubt of whether they have the mandate to do X or Y or Z.
But again, I don’t think I super understood what specific question you were asking me, so I might have totally talked past you.
But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.
I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and the treatment of its citizens. The quote is clearly noncentral for this case.
Thanks, to clarify: I don’t intend to make a “how dare the moderators moderate Less Wrong” objection. Rather, the objection is, “How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma.” (That’s why the grandparent specifies “long-time, well-regarded”, “many highly-upvoted contributions”, “We were here first”, &c.) I’m saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don’t want to accept literally any speech (which is why the grandparent mentions “removing low-quality [...] comments” as a legitimate moderator duty).
Note that “permanently restrict the account of” is different from “moderate”. For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic, and Achmiz complied. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I’m accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz.
Regarding Yudkowsky’s essay “Well-Kept Gardens Die By Pacifism”, please note that the end of the essay points out that a forum with a karma system is different from a forum (such as a mailing list) in which moderators are the only attention-allocation mechanism, and urges users not to excessively question themselves when considering downvoting. I agree with this! That’s why the grandparent emphasizes that users who don’t like Achmiz’s comments are free to downvote them. The grandparent also points out that users who don’t want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don’t see what actual problem exists that’s not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.
I fear that Yudkowsky might have been right when he claimed that “[a]ny community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.” I sincerely hope Less Wrong is worth saving.
Hmm, I am still not fully sure about the question (your original comment said “I think Oli Habryka has the integrity to give me a staight, no-bullshit answer here”, which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said’s net contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.
One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like “purchase him out of his right to use LessWrong” or something like that, by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts, to make it more clear that I am hoping for some kind of trade here, and don’t want this to feel like some kind of social slapdown.
Now, commenting on the individual pieces:
That’s why the grandparent specifies “long-time, well-regarded”, “many highly-upvoted contributions”, “We were here first”, &c.
Well, I mean, the disagreement surely is about whether Said, in his capacity as a commenter, is “well-regarded”. My sense is Said is quite polarizing and saying that he is a “long-time ill-regarded” user would be just as accurate. Similarly saying “many highly-downvoted contributions” is also accurate. (I think seniority matters a bit, though like not beyond a few years, and at least I don’t currently attach any special significance to someone having been around for 5 years vs. 10 years, though I can imagine this being a mistake).
This is not to say I would consider a summary that describes Said as a “long-time ill-regarded menace with many highly downvoted contributions” as accurate. But neither do I think your summary here is accurate. My sense is a long-time user with some highly upvoted comments and some highly downvoted comments can easily be net-negative for the site.
Neither do I feel that net-karma is currently at all a good guide to the quality of site contributions. First, karma is just very noisy, and sometimes random posts and comments get hundreds of karma when someone on Twitter links to them and the tweet goes viral. But second, and more importantly, there is a huge bias in karma towards positive karma. You frequently find comments with +70 karma and very rarely see comments with −70 karma. Some of that is a natural consequence of making comments and posts with higher karma more visible, some of that is that most people experience pushing someone into the negatives as a lot socially harsher than letting them hover somewhere around 0.
This is again not to say that I am actually confident that Said’s commenting contributions have been net-negative for the site. My current best guess is yes, but it’s not super obvious to me. I am however quite confident that there is a specific type of commenting interaction that has been quite negative, has driven away a lot of really valuable contributors, and doesn’t seem to have produced much value, which is the specific type of interaction that Ray is somehow trying to address with the rate-limiting rules.
The grandparent also points out that users who don’t want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don’t see what actual problem exists that’s not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.
I think people responded pretty extensively to the comment you mention here, but to give my personal response to this:
Most people (and especially new users) don’t keep track of individual commenters to the degree that would make it feasible to ban the people they would predictably have bad interactions with. The current proposal is basically to allow users to ban or unban Said however they like (since they can both fully ban him, and allow Said to comment without rate limit on their post); we are just suggesting a default that I expect to be best for most users and the default site experience.
Downvoting helps a bit with reducing visibility, but it doesn’t help a lot. I see downvoting in substantial part as a signal from the userbase to the authors and moderators to take some kind of long-term action. When someone’s comments are downvoted, authors still get notifications for them, and they still tend to blow up into large demon threads, and so just voting on comments doesn’t help that much with solving the moderation problem (this is less true for posts, but only a small fraction of Said’s contributions are in the form of posts, and I actually really like all of his posts, so this doesn’t really apply here). We can try to make automated systems here, but I can’t currently think of any super clear-cut rules we could put into code, since, as I said above, net-karma really is not a reliable guide. I do think it’s worth thinking more about (using the average of the most recent N comments helps a bit, but is really far from catching all the cases I am concerned about).
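As a purely illustrative sketch of the kind of automated rule gestured at here (an average over a user’s most recent N comments), and of why it is hard to trust: the functions below are hypothetical, with made-up names and thresholds, not anything the site actually runs.

```typescript
// Hypothetical sketch of a "recent average karma" check.
// Names and the threshold are invented for illustration only.

interface ScoredComment {
  postedAt: Date;
  baseScore: number; // net karma of the comment
}

function recentAverageKarma(comments: ScoredComment[], n: number): number {
  const recent = [...comments]
    .sort((a, b) => b.postedAt.getTime() - a.postedAt.getTime())
    .slice(0, n);
  if (recent.length === 0) return 0;
  return recent.reduce((sum, c) => sum + c.baseScore, 0) / recent.length;
}

// Example rule: flag a user for moderator review if their last 20 comments
// average below some threshold. Because karma skews positive (as noted
// above), a threshold near zero catches almost nothing, while a higher one
// sweeps up plenty of fine users, which is the difficulty with encoding
// this kind of judgment as a rule.
function shouldFlagForReview(comments: ScoredComment[]): boolean {
  const AVERAGE_THRESHOLD = 1; // arbitrary illustrative value
  return recentAverageKarma(comments, 20) < AVERAGE_THRESHOLD;
}
```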
Separately, I want to also make a bigger picture point about moderation on LessWrong:
LessWrong moderation definitely works on a case-law basis
There is no way I can meaningfully write down all the rules and guidelines about how people should behave in discourse in-advance. The way we’ve always made moderation decisions was to iterate locally on what things seem to be going wrong, and then try to formulate new rules, give individuals advice, and try to figure out general principles as they become necessary.
This case is the same. Yep, we’ve decided to take moderation action for this kind of behavior, more than we have done in the past. Said is the first prosecuted case, but I would absolutely want to hold all other users to the same standard going into the future (and indeed my sense is that Duncan is receiving a warning for some things that fall under that same standard). I think it’s good and proper for you to hold us to being consistent and ask us to moderate other people doing similar things in the future the same way as we’ve moderated Said here.
I hope this is all helpful. I still have a feeling you wanted some straightforward non-bullshit answer to a specific question, but I still don’t know which one, though I hope that what I’ve written above clarifies things at least a bit.
But second, and more importantly, there is a huge bias in karma towards positive karma.
I don’t know if it’s good that there’s a positive bias in karma, but I’m pretty sure the impulse that generates it is a good one. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall, even if downvoting is the best way to handle Said-type cases in particular.
I think I mostly meant “answer” in the sense of “reply” (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity.
I have a lot of extremely strong disagreements with this, but they can wait three months.
by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
how is this even a reasonable-
Isn’t this community close in idea terms to Effective Altruism? Wouldn’t it be better to say “Said, if you change your commenting habits in the manner we prescribe, we’ll donate $10k-$100k to a charity of your choice?”
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I’ve been a member of other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and any sort of payout, even $1, would never have even been considered.
Seems sad! Seems like there is an opportunity for trade here.
Salaries in Silicon Valley are high, and just the time spent on this specific moderation decision has probably cost around 2.5 total staff-weeks from engineers who could make around $270k on average in industry, so that already suggests something in the $10k range of costs (2.5/52 × $270k ≈ $13k).
And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.
We can also donate instead, but I don’t really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money over most charities having more money, so I don’t really get what this would improve. Also, not everyone cares about donating to charity, and that’s fine.
The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between “good user” and “ban”.
I guess I’m having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I’m familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don’t know).
I do want to note that my problem isn’t with offering Said money specifically; an offer like this to any user of any Internet community feels… extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that’s contracting and not unusual. I’m not even necessarily offended by such an offer, just, again, extremely surprised.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
I currently model the extended rationality and AI Alignment community as a professional community which for many people constitutes their primary work context, is responsible for their salary, and is responsible for a lot of daily infrastructure they use. Viewed through that lens, it makes sense that limiting someone’s access to a piece of community infrastructure can be quite costly, and somehow compensating people for the considerable cost that lack of access can cause seems reasonable.
I am not too worried about this being abusable. There are maybe 100 users who seem to me to use LessWrong as much as Said and who have contributed a similar amount to the overall rationality and AI Alignment project that I care about. At $10k each, paying all of them would only come to around $1M, which is less than the annual budget of Lightcone, and so doesn’t seem totally crazy.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
This, plus Vaniver’s comment, has made me update. LW has been doing some things that are pretty confusing if you look at it as a traditional Internet community, but that make more sense if you look at it as a professional community, perhaps akin to academic fields like the sciences and high-level mathematics. The high dollar figures quoted in many posts confused me until now.
I’ve had a nagging feeling in the past that the rationalist community isn’t careful enough about the incentive problems and conflicts of interest that arise when transferring reasonably large sums of money (despite being very careful about incentive landscapes in other ways—e.g. setting the incentives right for people to post, comment, etc., on LW—and also being fairly scrupulous in general). Most of the other examples I’ve seen have been kinda small-scale and so I haven’t really poked at them, but this proposal seems like it pretty clearly sets up terrible incentives, and is also hard to distinguish from nepotism. I think most people in other communities have gut-level deontological instincts about money which help protect them against these problems (e.g. I take Celarix to be expressing this sort of sentiment upthread), which rationalists are more likely to lack or override—and although I think those people get a lot wrong about money too, cases like these sure seem like a good place to apply Chesterton’s fence.
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout.
It might help to think of LW as more like a small town’s newspaper (with paid staff) than a hobbyist forum (with purely volunteer labor), which considers issues with “business expense” lenses instead of “personal budget” lenses.
Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn’t really fit neatly into my categories of what money paid to a person is for, and that may say more about my assumptions than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he’s made, paid to settle a legal issue… the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted in my reply to habryka, extremely surprising.
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
Exactly. It’s hilarious and awesome. (That is, the decision at least plausibly makes sense in context; and the fact that this is the result, as viewed from the outside, is delightful.)
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
I endorse much of what Oliver says in his replies, and I’m mostly burnt out from this convo at the moment so can’t do the follow-through here I’d ideally like. But it seemed important to publicly state some thoughts here before the moment passed:
Yes, the bar for banning or permanently limiting the speech of a longterm member in Said’s reference class is very high, and I’d treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – that longterm users can never have enough negative effects to warrant taking permanent action on. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens.
I don’t think the Spirit of LessWrong 2009 actually supports you on the specific claims you’re making here.
As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka, who founded the LessWrong team and got Eliezer’s buy-in, and now we have 6 years of track record that I think most people agree is much better than nobody in charge.
But, honestly, I don’t actually think you really believe these meta-level arguments (or, at least won’t upon reflection and maybe a week of distance). I think you disagree with our object level call on Said, and on the overall moderation philosophy that led to it. And, like, I do think there’s a lot to legitimately argue over with the object level call on Said and the overall moderation philosophy surrounding it. I’m fairly burnt out on talking about this for the immediate future, but fwiw I welcome top-level posts arguing about this and expect to engage with them in the future.
And if you decide to quit LessWrong in protest, well, I will be sad about that. I think your writing and generator are quite valuable. I do think there’s an important spirit of early LessWrong that you keep alive, and I’ve made important updates due to your contributions. But, also, man it doesn’t look like your relationship with the site is necessarily that healthy for you.
...
I think a lot of what you’re upset about is an overall sense that your home doesn’t feel like home anymore. I do think there is a legitimately sad thing worth grieving there.
But I think old LessWrong did, actually, die. And, if it hadn’t, well, it’s been 12 years and the world has changed. I think it wouldn’t make sense, by the Spirit of 2009 LessWrong’s lights, to stay exactly the way you remember it. I think some of this is due to specific philosophies the LessWrong 2.0 team brings (I think our original stated goal of “cause intellectual progress to happen faster/better” is very related to and driven by the original sequences, but I think our frame is subtly different). But meanwhile a lot of it is just about the world changing, and Eliezer moving on in some ways (early LessWrong’s spirit was AFAICT largely driven by Eliezer posting frequently, while braindumping a specific set of ideas he had to share. That process is now over and any subsequent process was going to be different somehow)
I don’t know that I really have a useful takeaway. Sometimes there isn’t one. But insofar as you think it is healthy for you to stay on LessWrong and you don’t want to quit in protest of the mod call on Said, fwiw I continue to welcome posts arguing for what you think the spirit of lesswrong should be, and/or where you think the mod team is fucking up.
(As previously stated, I’m fairly burnt out atm, but would be happy to talk more about this sometime in the future if it seemed helpful)
Not to respond to everything you’ve said, but I question the argument (as I understand it) that because someone {has been around a long time, is well-regarded, has many highly-upvoted contributions, has lots of karma}, they are necessarily someone who, at the end of the day, you want around / is net positive for the site.
Good contributions are relevant. But so are costs. Arguing against the costs seems valid, saying benefits outweigh costs seems valid, but assuming this is what you’re saying, I don’t think just saying someone has benefits means that obviously you want them as an unrestricted citizen.
(I think in fact how it’s actually gone is that all of those positive factors you list have gone into the moderators’ decisions so far not to outright ban Said over the years, and into why Ray preferred to rate limit Said rather than ban him. If Said were all negatives, no positives, he’d have been banned long ago.)
Correct me though if there’s a deeper argument here that I’m not seeing.
In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.
It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative styles, are just slightly to the side of the mods’ ideals.
(This was definitely my opinion of the state of moderation over at DSL, for example, until a few months ago. The former problem has, happily, been solved; the latter, unhappily, remains. Less Wrong likewise seems to be well on its way toward solving the former problem; I would not have thought the latter to obtain… but now my opinion, unsurprisingly, has shifted.)
Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way? As far as I know, no such thing has been done. (Aella’s post on the subject was manifestly nonsensical, to say the least; if that’s the best explanation we’ve got, then I think that it’s safe to say that the concept is incoherent nonsense, and using it does more harm than good.) But perhaps I’ve missed it?
Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way?
In the comment Zack cites, Raemon said the same when raising the idea of making it a prerequisite:
I have on my todo list to write up a post that’s like “hey guys here is an explanation of Frame Control/Manipulation that is more rigorous and more neutrally worded than Aella’s post about it, and here’s why I think we should have a habit of noticing it.”.
Also, for everyone’s awareness, I have since written up Tabooing “Frame Control” (which I’d hope would be like part 1 of 2 posts on the topic), but the reception of the post, i.e. 60ish karma, didn’t seem like everyone was like “okay yeah this concept is great”, and I currently think the ball is still in my court for either explaining the idea better, refactoring it into other ideas, or abandoning the project.
Yep! As far as I remember, in that thread Ray said something akin to “it might be reasonable to treat this as a prerequisite if someone wrote a better explanation of it and there had been a bunch of discussion of this”, but I don’t fully remember.
Aella’s post did seem like it had a bunch of issues and I would feel kind of uncomfortable with having a canonical concept with that as its only reference (I overall liked the post and thought it was good, but I don’t think a concept should reach canonicity just on the basis of that post, given its specific flaws).
Arnold also proposes that awareness of frame control—a concept that Achmiz has criticized—become something one is “obligated to learn, as a good LW citizen”.
Arnold says he is thinking about maybe proposing that, in future, after he has done the work to justify it and paid attention to how people react to it.
Moderation action on Said
(See also: Ruby’s moderator warning for Duncan)
I’ve been thinking for a week, and trying to sanity-check whether there are actual good examples of Said doing-the-thing-I’ve-complained-about, rather than “I formed a stereotype of Said and pattern match to it too quickly”, and such.
I think Said is a pretty confusing case though. I’m going to lay out my current thinking here, in a number of comments, and I expect at least a few more days of discussion as the LessWrong community digests this. I’ve pinned this post to the top of the frontpage for the day so users who weren’t following the discussion can decide whether to weigh in.
Here’s a quick overview of how I think about Said moderation:
Re: Recent Duncan Conflict.
I think he did some moderation-worthy things in the recent conflict with Duncan, but a) so did Duncan, and I think there’s a “it takes two-to-tango” aspect of demon threads, b) at most, those’d result in me giving one or both of them a 1-week ban and then calling it a day. I basically endorse Vaniver’s take on some object level stuff. I have a bit more to say but not much.
Overall pattern.
I think Said’s overall pattern of commenting includes a mix of “subtly enforcing norms that aren’t actual LW site norms (see below)”, “being pretty costly to interact with, in a way that feels particularly ‘like a trap’”, and “in at least some domains, being consistently not-very-correct in his implied criticisms”. I think each of those things is at least a little bad in isolation (though not necessarily moderation-worthy). But I think together they become worse than the sum of their parts. If he was consistently doing the entire pattern, I would either ban him, or invent new tools to either alleviate-the-cost or tax-the-behavior in a less heavy-handed way.
Not sufficient corresponding upside
I’d be a lot less wary of the previous pattern if I felt like Said was also contributing significantly more value to LessWrong. [Edit: I do, to be clear, think Said has contributed significant value, both in terms of keeping the spirit of the sequences alive in the world ala readthesequences.com, and through being a voice with a relatively rare (these days) perspective that keeps us honest in important ways. But I think the costs are, in fact, really high, and I think the object level value isn’t enough to fully counterbalance it]
Prior discussion and warnings.
We’ve had numerous discussions with Said about this (I think we’ve easily spent 100+ hours of moderator-time on it, and probably more like 200), including an explicit moderation warning.
Few recent problematic pattern instances.
That all said, prior to this ~month’s conflict with Duncan, I don’t have a confident belief that Said has recently strongly embodied the pattern I’m worried about. I think it was more common ~5 years ago. I cut Said some slack for the convo with Duncan because I think Duncan is kind of frustrating to argue with.
THAT said, I think it’s crept up at least somewhat occasionally in the past 3 years, and having to evaluate whether it’s creeping up to an unacceptable level is fairly costly.
THAT THAT said, I do appreciate that the first time we gave him an explicit moderation notice, I don’t think we had any problems for ~3 years afterwards.
Strong(ish) statement of intent
Said’s made a number of comments that make me think he would still be doing a pattern I consider problematic if the opportunity arose. I think he’ll follow the letter of the law if we give it to him, but it’s difficult to specify a letter-of-the-law that does the thing I care about.
A thing that is quite important to me is that users feel comfortable ignoring Said if they don’t think he’s productive to engage with. (See below for more thoughts on this). One reason this is difficult is that it’s hard to establish common knowledge about it among authors. Another reason is that I think Said’s conversational patterns have the effect of making authors and other commenters feel obliged to engage with him (but, this is pretty hard to judge in a clear-cut way)
For now, after a bunch of discussion with other moderators, reading the thread-so-far, and talking with various advisors – my current call is giving Said a rate limit of 3-comments-per-post-per-week. See this post on the general philosophy of rate limiting as a moderation tool we’re experimenting with. I think there’s a decent chance we’ll ship some new features soon that make this actually a bit more lenient, but don’t want to promise that at the moment.
I am not very confident in this call, and am open to more counterarguments here, from Said or others. I’ll talk more about some of the reasoning here at the end of this comment. But I want to start by laying out some more background reasoning for the entire moderation decision.
In particular, if either Said makes a case that he can obey the spirit of “don’t imply people have an obligation to engage with his comments”; or, someone suggests a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way, I’d feel fairly good about revoking the rate-limit.
(Note: one counterproposal I’ve seen is to develop a rate-limit based entirely on karma rather than moderator judgment, and that it is better to do this than to have moderators make individual judgment calls about specific users. I do think this idea has merit, although it’s hard to build. I have more to say about it at the end)
Said Patterns
3 years ago Habryka summarized a pattern we’d seen a lot:
I think the most central of this is in this thread on circling, where AFAICT Said asked for examples of some situations where social manipulation is “good.” Qiaochu and Sarah Constantin offer some examples. Said responds to both of them by questioning their examples and doubting their experience in a way that is pretty frustrating to respond to (and in the Sarah case seemed to me like a central example of Said missing the point, and the evo-psych argument not even making sense in context, which makes me distrust his taste on these matters). [1, 2]
I don’t actually remember more examples of that pattern offhand. I might be persuaded that I overupdated on some early examples. But after thinking a few days, I think a cruxy piece of evidence on how I think it makes sense to moderate Said is this comment from ~3 years ago:
For completeness, Said later elaborates:
Habryka and Said discussed it at length at the time.
I want to reiterate that I think asking for examples is fine (and would say the same thing for questions like “what do you mean by ‘spirituality’?” or whatnot). I agree that a) authors generally should try to provide examples in the first place, b) if they don’t respond to questions about examples, that’s bayesian evidence about whether their idea will ground out into something real. I’m fairly happy with clone of saturn’s variation on Said’s statement, that if the author can’t provide examples, “the post should be regarded as less trustworthy” (as opposed to “author should be interpreted as ignorant”), and gwern’s note that if they can’t, they should forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
The thing I object fairly strongly to is “there is an obligation on the part of the author to respond.”
I definitely don’t think there’s a social obligation, and I don’t think most LessWrongers think that. (I’m not sure if Said meant to imply that.) Insofar as he means there’s a bayesian obligation-in-the-laws-of-observation/inference, I weakly agree but think he overstates it: there are a lot of reasons an author might not respond (“belief that a given conversation won’t be productive,” “volume of such comments,” “trying to have a 202 conversation and not being interested in 101 objections,” and simple opportunity cost).
From a practical ‘things that the LessWrong culture should socially encourage people to do’ standpoint, I liked Vladimir’s point that:
i.e. I want there to be good criticism on LW, and think that people feeling free to ignore criticism encourages more good criticism, in part by encouraging more posts and engagement.
It’s been a few years and I don’t know that Said still endorses the obligation phrasing, but much of my objection to Said’s individual commenting stylistic choices has a lot to do with reinforcing this feeling of obligation. I also think (less confidently) that authors get the impression that Said thinks that, if they haven’t answered a question to his satisfaction (with him standing in as an example of a reasonable median LW user), they should feel a [social] obligation to succeed at doing so.
Whether he intends this or not, I think it’s an impression that comes across, and which exerts social pressure, and I think this has a significant negative effect on the site.
I’m a bit confused about how to think about “prescribed norms” vs “good ideas that get selected on organically.” In a previous post Vladimir_Nesov argues that prescribing norms generally doesn’t make sense. Habryka had a similar take yesterday when I spoke with him. I’m not sure I agree (and some of my previous language here has probably assumed a somewhat more prescriptivist/top-down approach to moderating LessWrong that I may end up disendorsing after chatting more with Habryka).
But even in a more organic approach to moderation, I, Habryka, and Ruby think it’s pretty reasonable for moderators to take action to prevent Said from implying that there’s some kind of norm here and exerting pressure around it in other people’s comment sections, when, AFAICT, there is no consensus on such a norm. I predict a majority of LessWrong members would not agree with that norm, either on normative-Bayesian terms or on consequentialist social-norm-design terms. (To be clear, I think many people just haven’t thought about it at all, but I expect them to at least weakly disagree when exposed to the arguments. “What is the actual collective endorsed position of the LW commentariat” is somewhat cruxy for me here.)
Rate-limit decision reasoning
If this was our first (or second or third) argument with Said over this, I’d think stating this clearly and giving him a warning would be a reasonable next action. Given that we’ve been intermittently arguing about this for 5 years, spending a hundred-plus hours of mod time discussing it with him, it feels more reasonable to move to an ultimatum of “somehow, Said needs to stop exerting this pressure in other people’s comment threads, or moderators will take some kind of significant action to either limit the damage or impose a tax on it.”
If we were limited to our existing moderator tools, I would think it reasonable to ban him. But we are in the middle of setting up a variety of rate limiting tools to generally give mods more flexibility, and avoid being heavier-handed than we need to be.
I’m fairly open to a variety of options here. FWIW, I am interested in what Said actually prefers here. (I expect it is not a very fun conversation to be asked by the people-in-power “which way of constraining you from doing the thing you think is right seems least-bad to you?”, but, insofar as Said or others have an opinion on that I am interested)
I am interested in building an automated tool that detects demon threads and rate limits people based on voting patterns (a rough sketch of the kind of heuristic I have in mind follows the cruxes below). I most likely want to try to build such a tool regardless of what call we make on Said, and if I had a working version of such a tool I might be pretty satisfied with using it instead. My primary cruxes are
a) I think it’s a lot harder to build and I’m not sure we can succeed,
b) I do just think it’s okay for moderators to make judgment calls about individual users based on longterm trends. That’s sort of what mods are for. (I do think for established users it’s important for this process to be fairly costly and subjected to public scrutiny)
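To make crux a) more concrete, here is a purely illustrative sketch of the kind of voting-pattern heuristic such a tool might start from. The names and thresholds are hypothetical, and tuning them well is exactly the part I’m unsure we can succeed at:

```typescript
// A purely illustrative heuristic for flagging "demon threads" from voting
// patterns: a long back-and-forth between the same two users where a large
// share of the exchange is net-downvoted. Names and thresholds are hypothetical.

interface ThreadComment {
  authorId: string;
  parentAuthorId: string | null;
  karma: number;
}

function looksLikeDemonThread(thread: ThreadComment[]): boolean {
  const exchangeCounts = new Map<string, number>();
  let downvoted = 0;

  for (const c of thread) {
    if (!c.parentAuthorId || c.parentAuthorId === c.authorId) continue;
    // Count replies per unordered pair of users.
    const pair = [c.authorId, c.parentAuthorId].sort().join("|");
    exchangeCounts.set(pair, (exchangeCounts.get(pair) ?? 0) + 1);
    if (c.karma < 0) downvoted++;
  }

  const longestExchange = Math.max(0, ...Array.from(exchangeCounts.values()));
  const downvotedShare = thread.length > 0 ? downvoted / thread.length : 0;

  // Placeholder thresholds; getting these right is the hard part.
  return longestExchange >= 10 && downvotedShare > 0.3;
}
```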
But for now, after chatting with Oli and Ruby and Robert, I’m implementing the 3-comments-per-post-per-week rule for Said. If we end up having time to build/validate an organic karma-based rate limit that solves the problem I’m worried about here, I might switch to that. Meanwhile, some additional features I haven’t shipped yet, which I can’t make promises about, but which I personally think would be good to ship soon, include (a rough sketch of how these might compose with the rate limit follows the list):
A boolean flag for individual posts so authors can allow rate-limited people to comment freely, and probably also a user-setting for this. Another possibility is a user-specific whitelist, but that’s a bit more complicated and I’m not sure if there’s anyone who would want that who wouldn’t want the simpler option.
I’d ideally have this flag set on this post, and probably on other moderation posts written by admins.
Rate-limited users in a given comment section have a small icon that lets you know they’re rate-limited, so you have reasonable expectations of when they can reply.
Updating the /moderation page to list rate limited users, ideally with some kind of reason / moderation-warning.
Updating rate limits to ensure that users can comment as much as they want on their own posts (we made a PR for this change a week ago and haven’t shipped it yet largely because this moderation decision took a lot of time)
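As a rough sketch (hypothetical names, not the actual implementation) of how the rate limit and the exemptions above might compose:

```typescript
// A rough sketch (hypothetical names) of how the 3-comments-per-post-per-week
// limit and the exemptions listed above might fit together.

interface RateLimit {
  maxCommentsPerPost: number;  // e.g. 3
  windowDays: number;          // e.g. 7; the caller counts comments over this window
}

interface PostSettings {
  authorId: string;
  ignoreRateLimits: boolean;   // the per-post "rate-limited users may comment freely" flag
}

function canComment(
  userId: string,
  post: PostSettings,
  limit: RateLimit | null,
  commentsOnPostInWindow: number
): boolean {
  if (!limit) return true;                   // user is not rate limited
  if (post.authorId === userId) return true; // users can always comment on their own posts
  if (post.ignoreRateLimits) return true;    // author opted this post out of rate limits
  return commentsOnPostInWindow < limit.maxCommentsPerPost;
}
```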
Some reasons for this-specific-rate-limit rather than alternatives are:
3 comments within a week is enough for an initial back-and-forth where Said asks questions or makes a critique, the author responds, and Said responds-to-the-response (i.e. allowing the 4 layers of intellectual conversation, and getting the parts of Said’s comments that most people agree are valuable).
It caps the conversation before it can spiral into an unproductive, escalatory thread.
It signals culturally that the problem here isn’t about initial requests for examples or criticisms, it’s about the pattern that tends to play out deeper in threads. I think it’s useful for this to be legible both to authors engaging with Said, and to other commenters inferring site norms (i.e. some amount of Socrates is good, too much can cause problems).
If 3 comments isn’t enough to fully resolve a conversation, it’s still possible to follow up eventually.
Said can still write top level posts arguing for norms that he thinks would be better, or arguing about specific posts that he thinks are problematic.
That all said, the idea of using rate-limits as a mod-tool is pretty new, I’m not actually sure how it’ll play out. Again, I’m open to alternatives. (And again, see this post for more thoughts on rate limiting)
Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.
This sounds drastic enough that it makes me wonder, since the claimed reason was that Said’s commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see if there is any sort of measurable increase in comment quality, site mood or good contributors becoming more active moving forward?
Also, is this thing an experiment with a set duration, or a permanent measure? If it’s permanent, it has a very rubber-room vibe to it, where you don’t outright ban someone but continually humiliate them if they keep coming by, hoping they’ll eventually get the hint.
A background model I want to put out here: two frames that feel relevant to me here are “harm minimization” and “taxing”. I think the behavior Said does has unacceptably large costs in aggregate (and, perhaps to remind/clarify, I think a similar-in-some-ways set of behaviors I’ve seen Duncan do also would have unacceptably large costs in aggregate).
And the three solutions I’d consider here, at some level of abstraction, are:
So-and-so agrees to stop doing the behavior (harder when the behavior is subtle and multifaceted, but, doable in principle)
Moderators restrict the user such that they can’t do the behavior to unacceptable degrees
Moderators tax the behavior such that doing-too-much-of-it is harder overall (but, it’s still something of the user’s choice if they want to do more of it and pay more tax).
All three options seem reasonable to me a priori; it’s mostly a question of “is there a good way to implement them?”. The current rate-limit proposal for Said is mostly option 2. All else being equal I’d probably prefer option 3, but the options I can think of seem harder to implement, and dev-time for this sort of thing is not unlimited.
Quick update for now: @Said Achmiz’s rate limit has expired, and I don’t plan to revisit applying-it-again unless a problem comes up.
I do feel like there’s some important stuff left unresolved here. @Zack_M_Davis’s comment on this other post asks some questions that seem worth answering.
I’d hoped to write up something longer this week but was fairly busy, and it seemed better to explicitly acknowledge that. For the immediate future I think improving on the auto-rate-limits and some other systemic stuff seems more important than arguing or clarifying the particular points here.
It seems like the natural solution here would be something that establishes this common knowledge. Something like Twitter’s “community notes” being attached to relevant comments, saying something like “There is no obligation to respond to this comment; please feel comfortable ignoring this user if you don’t think he will be productive to engage with. Discussion here.”
Yeah, I did list that as one of the options I’d consider in the previous announcement.
A problem I anticipate is that it’s some combination of ineffective, and also in some ways a harsher punishment. But if Said actively preferred some version of this solution I wouldn’t be opposed to doing it instead of rate-limiting.
Forgive me for making what may be an obvious suggestion which you’ve dismissed for some good reason, but… is there, actually, some reason why you can’t attach such a note to all comments? (UI-wise, perhaps as a note above the comment form, or something?) There isn’t an obligation, in terms of either the site rules or the community norms as the moderators have defined them, to respond to any comment, is there? (Perhaps with the exception of comments written by moderators…? Or maybe not even those?)
That is, it seems to me that the concern here can be characterized as a question of communicating forum norms to new participants. Can it not be treated as such? (It’s surely not unreasonable to want community members to refrain from actively interfering with the process of communicating rules and norms to newcomers, such as by lying to them about what those rules/norms are, or some such… but the problem, as such, is one which should be approached directly, by means of centralized action, no?)
I think it could be quite nice to give new users information about what site norms are and give a suggested spirit in which to engage with comments.
(Though I’m sure there’s lots of things it’d be quite nice to tell new users about the spirit of the site, but there’s of course bandwidth limitations on how much they’ll read, so just because it’s an improvement doesn’t mean it’s worth doing.)
If it’s worth banning[1] someone (and even urgently investing development resources into a feature that enables that banning-or-whatever!) because their comments might, possibly, on some occasions, potentially mislead users into falsely believing X… then it surely must be worthwhile to simply outright tell users ¬X?
(I mean, of all the things that it might be nice to tell new users, this, which—if this topic, and all the moderators’ comments on it, are to be believed—is so consequential, has to be right up at the top of the list?)
Or rate-limiting, or applying any other such moderation action to.
This is not what I said though.
Now that you’ve clarified your objection here, I want to note that this does not respond to the central point of the grandparent comment:
If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?
Communicating this to users seems like an obvious win, and one which would make a huge chunk of this entire discussion utterly moot.
Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style seems unlikely to be the right choice. You are a UI designer and you are well-aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution.
But even assuming we did add such a message, there are many other problems:
Posting such a message would signal a level of importance for this specific norm that is not commensurate with its actual importance; the issue does not actually come up very frequently in conversations that don’t involve you and a small number of other users. We have the standard frontpage commenting guidelines, and they cover what I consider the actually most important things to communicate, and they are approximately the maximum length I expect new users to read. Adding this warning would have to displace one of the existing guidelines, which seems very unlikely to be worth it.
Banner blindness is real, and if you put the same block of text everywhere, people will quickly learn to ignore it. This has already happened with the existing moderation guidelines and frontpage guidelines.
If you have a sign in a space that says “don’t scream at people” but then lots of people do actually scream at you in that room, this doesn’t actually really help very much, and more likely just reduces trust in your ability to set any kind of norm in your space. I’ve really done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience. The correct response by users to your comments, in the presence of the box with the guideline, would be “There is a very prominent rule that says I am not obligated to respond, so why aren’t you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?”, which then would just bring us back to square one.
My guess is you will respond to this with some statement of the form “but I have said many times that I do not think the norms are such that you have an obligation to respond”, but man, subtext and text do just differ frequently in communication, and the subtext of your comments does really just tend to communicate the opposite. A way out of this situation might be that you just include a disclaimer in the first comment on every post, but I can also imagine that not working for a bunch of messy reasons.
I can also imagine you responding to this with “but I can’t possibly create an obligation to respond, the only people who can do that are the moderators”, which seems to be a stance implied by some other comments you wrote recently. This stance seems to me to fail to model how actual social obligations develop and how people build knowledge about social norms in a space. The moderators only set a small fraction of the norms and culture of the site, and of course individual users can create an obligation for someone to respond.
I am not super interested in going into depth here, but felt somewhat obligated to reply since your suggestion had some number of upvotes.
First, concerning the first half of your comment (re: importance of this information, best way of communicating it):
I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”
Have you checked that users understand that they don’t have an obligation to respond to comments?
If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)
Second, concerning the second half of your comment:
Frankly, this whole perspective you describe just seems bizarre.
Of course I can’t possibly create a formal obligation to respond to comments. Of course only the moderators can do that. I can’t even create social norms that responses are expected, if the moderators don’t support me in this (and especially if they actively oppose me). I’ve never said that such a formal obligation or social norm exists; and if I ever did say that, all it would take is a moderator posting a comment saying “no, actually” to unambiguously controvert the claim.
But on the other hand, I can’t create an epistemic obligation to respond, either—because it already either exists or already doesn’t exist, regardless of what I think or do.
So, you say:
If someone writes a post and someone else (regardless of who it is!) writes a comment that says “what are some examples?”, then whether the post author “faces humiliation” (hardly the wording I’d choose, but let’s go with it) in front of the Less Wrong audience if they don’t respond is… not something that I can meaningfully affect. That judgment is in the minds of the aforesaid audience. I can’t make people judge thus, nor can I stop them from doing so. To ascribe this effect to me, or to any specific commenter, seems like willful denial of reality.
This would be a highly unreasonable response. And the correct counter-response by moderators, to such a question, would be:
“Because users can’t ‘create a strong obligation for you to respond’. We’ve made it clear that you have no such obligation. (And the commenters certainly aren’t claiming otherwise, as you can see.) It would be utterly absurd for us to moderate or delete these comments, just because you don’t want to respond to them. If you feel that you must respond, respond; if you don’t want to, don’t. You’re an adult and this is your decision to make.”
(You might also add that the downvote button exists for a reason. You might point out, additionally, that low-karma comments are hidden by default. And if the comments in question are actually highly upvoted, well, that suggests something, doesn’t it?)
(I am not planning to engage further at this point.
My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don’t think I am saying particularly complicated things, and I think I’ve communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them.
My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we’ll continue to take some moderator actions until things look better by our models. I think we’ve both gone far beyond our duty of effort to explain where we are coming from and what our models are.)
This seems like an odd response.
In the first part of the grandparent comment, I asked a couple of questions. I can’t possibly “figure out what you mean” in those cases, since they were questions about what you’ve done or haven’t done, and about what you think of something I asked.
In the second part of the grandparent comment, I gave arguments for why some things you said seem wrong or incoherent. There, too, “figuring out what you mean” seems like an inapplicable concept.
You and the other moderators have certainly written many words. But only the last few comments on this topic have contained even an attempted explanation of what problem you’re trying to solve (this “enforcement of norms” thing), and there, you’ve not only not “gone far beyond your duty” to explain—you’ve explicitly disclaimed any attempt at explanation. You’ve outright said that you won’t explain and won’t try!
It’s important for users to know when it comes up. It doesn’t come up much except with you.
(I wrote the following before habryka wrote his message)
While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I’ve been expressing concerns about in this particular discussion.
The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).
But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
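As a minimal sketch of that trigger condition (hypothetical names and thresholds, just to make the idea concrete):

```typescript
// A minimal sketch of the trigger described above: show the "feel free to
// disengage" nudge once one user has downvoted-and-replied to the same person
// twice in a thread, or ~2 distinct users in the thread have done so.
// Names and thresholds are hypothetical, not an implemented feature.

interface DownvoteAndReply {
  actorId: string;   // the user who downvoted and replied
  targetId: string;  // the user they replied to
}

function shouldShowDisengageNudge(events: DownvoteAndReply[]): boolean {
  const perPair = new Map<string, number>();
  for (const e of events) {
    const key = `${e.actorId}->${e.targetId}`;
    perPair.set(key, (perPair.get(key) ?? 0) + 1);
  }

  const repeatedByOneUser = Array.from(perPair.values()).some((n) => n >= 2);
  const distinctActors = new Set(events.map((e) => e.actorId)).size;

  return repeatedByOneUser || distinctActors >= 2;
}
```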
This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)
Oh? My mistake, then. Should it be “because their comments have, on some occasions, misled users into falsely believing X”?
(It’s not clear to me, I will say, whether you are claiming this is actually something that ever happened. Are you? I will note that, as you’ll find if you peruse my comment history, I have on more than one occasion taken pains to explicitly clarify that Less Wrong does not, in fact, have a norm that says that responding to comments is mandatory, which is the opposite of misleading people into believing that such a norm exists…)
No. This is still oversimplifying the issue, which I specifically disclaimed. Ben Pace gives a sense of it here:
The problem is implicit enforcement of norms. Your stated beliefs do help alleviate this but only somewhat. And, like Ben also said in that comment, from a moderator perspective it’s often correct to take mod action regardless of whether someone meant to do something we think has had an outsized harm on the site.
I’ve now spent (honestly more than) the amount of time I endorse on this discussion. I am still mulling over a lot of the overall discussion, but in the interest of declaring this done for now, I’m declaring that we’ll leave the rate limit in place for ~3 months, and re-evaluate then. I feel pretty confident doing this because it seems commensurate with the original moderation warning (i.e. a 3 month rate limit seems similar to me in magnitude to a 1-month ban, and I think Said’s comments in the Duncan/Said conflict count as a triggering instance).
I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won’t have the impacts the mod team is worried about. I don’t know that we explained this maximally well, but I think we explained it well enough that it should be fairly obvious to you why your comment here is missing the point, and if it’s not, I don’t really know what to do about that.
Alright, fair enough, so then…
… but then my next question is:
What the heck is “implicit enforcement of norms”??
To be quite honest, I think you have barely explained it at all. I’ve been trying to get an explanation out of you, and I have to say: it’s like pulling teeth. It seems like we’re getting somewhere, finally? Maybe?
You’re asking me to change my commenting behavior. I can’t even consider doing that unless I know what you think the problem is.
So, questions:
What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?
This “implicit enforcement of norms” (whatever it is)—is it a problem additionally to making false claims about what norms exist?
If the answer to #2 is “yes”, then what is your response to my earlier comments pointing out that no such false claims took place?
A norm is a pattern of behavior, something people can recognize and enact. Feeding a norm involves making a pattern of behavior more available (easy to learn and perceive), and more desirable (motivating its enactment, punishing its non-enactment). A norm can involve self-enforcement (here “self” refers to the norm, not to a person), adjoining punishment of non-enforcers and reward of enforcers as part of the norm. A well-fed norm is ubiquitous status quo, so available you can’t unsee it. It can still be opted-out of, by not being enacted or enforced, at the cost of punishment from those who enforce it. It can be opposed by conspicuously doing the opposite of what the norm prescribes, breaking the pattern, thus feeding a new norm of conspicuously opposing the original norm.
Almost all anti-epistemology is epistemic damage perpetrated by self-enforcing norms. Tolerance is boundaries against enforcement of norms. Intolerance of tolerance breaks it down, tolerating tolerance allows it to survive, restricting virality of self-enforcing norms. The self-enforcing norm of tolerance that punishes intolerance potentially exterminates valuable norms, not obviously a good idea.
So there is a norm of responding to criticism, its power is the weight of obligation to do that. It always exists in principle, at some level of power, not as categorically absent or present. I think clearly there are many ways of feeding that norm, or not depriving it of influence, that are rather implicit.
(Edit: Some ninja-editing, Said quoted the pre-edit version of third paragraph. Also fixed the error in second paragraph where I originally equivocated between tolerating tolerance and self-enforcing tolerance.)
Perhaps, for some values of “feeding that norm” and “[not] not depriving it of influence”. But is this “enforcement”? I do not think so. As far as I can tell, when there is a governing power (and there is surely one here), enforcement of the power’s rules can be done by that power only. (Power can be delegated—such as by the LW admins granting authors the ability to ban users from their posts—but otherwise, it is unitary. And such delegated power isn’t at all what’s being discussed here, as far as I can tell.)
That’s fair, but I predict that the central moderators’ complaint is in the vicinity of what I described, and has nothing to do with more specific interpretations of enforcement.
If so, then that complaint seems wildly unreasonable. The power of moderators to enforce a norm (or a norm’s opposite) is vastly greater than the power of any ordinary user to subtly influence the culture toward acceptance or rejection of a norm. A single comment from a moderator so comprehensively outweighs the influence, on norm-formation, of even hundreds of comments from any ordinary user, that it seems difficult to believe that moderators would ever need to do anything but post the very occasional short comment that links to a statement of the rules/norms and reaffirms that those rules/norms are still in effect.
(At least, for norms of the sort that we’re discussing. It would be different for, e.g., “users should do X”. You can punish people for breaking rules of the form “users should never do X”; that’s easy enough. Rules/norms of the form “users don’t need to do X”—i.e., those like the one we’ve been discussing—are even easier; you don’t need to punish anything, just occasionally reaffirm or remind people that X is not mandatory. But “users should do X” is tricky, if X isn’t something that you can feasibly mandate; that takes encouragement, incentives, etc. But, of course, that isn’t at all the sort of thing we’re talking about…)
Everyone can feed a norm, and direct action by moderators can be helpless before strong norms, as scorched-earth capabilities can still be insufficient for reaching more subtle targets. Thus discouraging the feeding of particular norms rather than going against the norms themselves.
If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it’s not mandatory doesn’t obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.
What you are describing would have to be a very well-entrenched and widespread norm, supported by many users, and opposed by few users. Such a thing is perhaps possible (I have my doubts about this; it seems to me that such a hypothetical scenario would also require, for one thing, a lack of buy-in from the moderators); but even if it is—note how far we have traveled from anything resembling the situation at hand!
Motivation gets internalized, following a norm can be consciously endorsed, disobeying a norm can be emotionally valent. So it’s not just about external influence in affecting the norm, there is also the issue of what to do when the norm is already in someone’s head. To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster. But I think it’s a real thing that happens all the time.
This particular norm is obviously well-known in the wider world, some people have it well-entrenched in themselves. The problem discussed above was reinforcing or spreading the norm, but there is also a problem of triggering the norm. It might be a borderline case of feeding it (in the form of its claim to apply on LW as well), but most of the effect is in influencing people who already buy the norm towards enacting it, by setting up central conditions for its enactment. Which can be unrewarding for them, but necessary on the pain of disobeying the norm entrenched in their mind.
For example, what lsusr is talking about here is trying not to trigger the norm. Statements are less imposing than questions in that they are less valent as triggers for response-obligation norms. This respects boundaries of people’s emotional equilibrium, maintains comfort. When the norms/emotions make unhealthy demands on one’s behavior, this leads to more serious issues. It’s worth correcting, but not without awareness of what might be going on. I guess this comes back to motivating some interpretative labor, but I think there are relevant heuristics at all levels of subtlety.
Just so.
In general, what you are talking about seems to me to be very much a case of catering to utility monsters, and denying that people have the responsibility to manage their own feelings. It should, no doubt, be permissible to behave in such ways (i.e., to carefully try to avoid triggering various unhealthy, corrosive, and self-sabotaging habits / beliefs, etc.), but it surely ought not be mandatory. That incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive.
EDIT: Also, and importantly, I think that describing this sort of thing as a “norm” is fundamentally inaccurate. Such habits/beliefs may contribute to creating social norms, but they are not themselves social norms; the distinction matters.
That’s a side of an idealism debate, a valid argument that pushes in this direction, but there are other arguments that push in the opposite direction, it’s not one-sided.
Some people change, given time or appropriate prodding. There are ideological (as in the set of endorsed principles) or emotional flaws, lack of capability at projecting sufficiently thick skin, or of thinking in a way that makes thick skin unnecessary, with defenses against admitting this or being called out on it. It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering, as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly.
I retain my view that to a first approximation, people don’t change.
And even if they do—well, when they’ve changed, then they can participate usefully and non-destructively. Personal flaws are, in a sense, forgivable, as we are all human, and none of us is perfect; but “forgivable” does not mean “tolerable, in the context of this community, this endeavor, this task”.
I think we are very far from “zero” in this regard. Going all the way to “zero” is not even what I am proposing, nor would propose (for example, I am entirely in favor of forbidding personal insults, vulgarity, etc., even if some hypothetical ideal reasoner would be entirely unfazed even by such things).
But that the damage done by catering to “utility monsters” of the sort who find requests for clarification to be severely unpleasant, is profound and far-ranging, seems to me to be too obvious to seriously dispute. It’s hypothetically possible to acknowledge this while claiming that failing to cater thusly has even more severely damaging consequences, but—well, that would be one heck of an uphill battle, to make that case.
Well, I’m certainly all for that.
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples, and there are no standard words like “weak insult” to delineate the issue; it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I agree that unreasonable demands are unreasonable. Pointing them out gains more weight after you signal ability to correctly perceive the distinction between “reasonable”/excusable and clearly unreasonable demands for catering. Though that often leads to giving up or not getting involved. So there is value in idealism in a neglected direction, it keeps the norm of being aware of that direction alive.
I must confess that I am very skeptical. It seems to me that any relevant thing that would need to be avoided, is a thing that is actually good, and avoiding which is bad (e.g., asking for examples of claims, concretizations of abstract concepts, clarifications of term usage, etc.). Of course if there were some action which were avoidable as cheaply (both in the “effort to avoid” and “consequences of avoiding” sense) as vulgarity and personal insults are avoidable, then avoiding it might be good. (Or might not; there is at least one obvious way in which it might actually be bad to avoid such things even if it were both possible and cheap to do so! But we may assume that possibility away, for now.)
But is there such a thing…? I find it difficult to imagine what it might be…
I agree that it’s unclear whether steps in this direction are actually any good, or whether instead they are mildly bad, if we ignore instances of acute conflict. But I think there is room for optimization that won’t have substantive negative consequences in the dimensions worth caring about, yet would be effective in avoiding conflict.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is. To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
So it’s things like adopting lsusr’s suggestion to prefer statements to questions. A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs. If it’s still not obvious, either it wouldn’t have worked with more explicit explanation, or it’s my argument’s problem, and then it’s no loss; this heuristic leaves the asymmetry intact. I might clarify when asked for clarification. Things like that, generated as appropriate by awareness of this objective.
One does not capitulate to utility monsters, especially not if one’s life isn’t dependent on it.
I wholly agree.
As I said in reply to that comment, it’s an interesting suggestion, and I am not entirely averse to applying it in certain cases. But it can hardly be made into a rule, can it? Like, “avoid vulgarity” and “don’t use direct personal attacks” can be made into rules. There generally isn’t any reason to break them, except perhaps in the most extreme, rare cases. But “prefer statements to questions”—how do you make that a rule? Or anything even resembling a rule? At best it can form one element of a set of general, individually fairly weak, suggestions about how to reduce conflict. But no more than that.
I follow just this same heuristic!
Unfortunately, it doesn’t exactly work to eliminate or even meaningfully reduce the incidence of utility-monster attack—as this very post we’re commenting under illustrates.
(Indeed I’ve found it to have the opposite effect. Which is a catch-22, of course. Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.)
I’m gesturing at settling into an unsatisfying strategic equilibrium, as long as there isn’t enough engineering effort towards clarifying the issue (negotiating boundaries that are more reasonable-on-reflection than the accidental status quo). I don’t mean capitulation as a target even if the only place “not provoking” happens to lead is capitulation (in reality, or given your model of the situation). My model doesn’t say that this is the case.
The problem with this framing (as you communicate it, not necessarily in your own mind) is that it could look the same even if there are affordances for de-escalation at every step, and it’s unclear how efficiently they were put to use (it’s always possible to commit a lot of effort towards measures that won’t help; the effort itself doesn’t rule out availability of something effective). Equivalence between “not provoking” and “capitulation” is a possible conclusion from observing absence of these affordances, or alternatively it’s the reason the affordances remain untapped. It’s hard to tell.
What would any of what you’re alluding to look like, more concretely…?
(Of course I also object to the term “de-escalation” here, due to the implication of “escalation”, but maybe that’s beside the point.)
Just as escalation makes a conflict more acute, de-escalation settles it. Even otherwise uninvolved parties could plot either; there is no implication that absence of de-escalation is escalation. Certainly one party could de-escalate a conflict that the other escalates.
Some examples are two comments up, as well as your list of things that don’t work. Another move not mentioned so far is deciding to exit certain conversations.
The harder and more relevant question is whether some of these heuristics have the desired effect, and which ones are effective when. I think only awareness of the objective of de-escalation could apply these in a sensible way; specific rules (less detailed than a book-length intuition-distilling treatise) won’t work efficiently (that is, without sacrificing valuable outcomes).
I don’t think I disagree with anything you say in particular, not exactly, but I really am not sure that I have any sense of what the category boundaries of this “de-escalation” are supposed to be, or what the predicate for it would look like. (Clearly the naive connotation isn’t right, which is fine—although maybe it suggests a different choice of term? or not, I don’t really know—but I’m not sure where else to look for the answers.)
Maybe this question: what exactly is “the desired effect”? Is it “avoid conflict”? “Avoid unnecessary conflict”? “Avoid false appearance of conflict”? “Avoid misunderstanding”? Something else?
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.
In some situations acute conflict could be useful, a Schelling point for change (time to publish relevant essays, which might be heard more vividly as part of this event). If it’s not useful, I think de-escalation is the way, with absence of acute conflict as the desired effect.
(De-escalation is not even centrally avoidance of individual instances of conflict. I think it’s more important what the popular perception of one’s intentions/objectives/attitudes is, and to prevent formation of grudges. Mostly not bothering those who probably have grudges. This more robustly targets absence of acute conflict, making some isolated incidents irrelevant.)
Is this really anything like a natural category, though?
Like… obviously, “moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized” are things that happen. But once you say “not necessarily intentionally” in your definitions of “escalation” and “de-escalation”, aren’t you left with “whatever actions happen to increase the chance of there being an acute conflict” (and similarly “decrease” for “de-escalation”)? But what actions have these effects clearly depends heavily on all sorts of situational factors, identities and relationships of the participants, the subject matter of the conversation, etc., etc., such that “what specific actions will, as it will turn out, have contributed to increasing/decreasing the chance of conflict in particular situation X” is… well, I don’t want to say “not knowable”, but certainly knowing such a thing is, so to speak, “interpersonal-interaction-complete”.
What can really be said about how to avoid “acute conflict” that isn’t going to have components like “don’t discuss such-and-such topics; don’t get into such-and-such conversations if people with such-and-such social positions in your environment have such-and-such views; etc.”?
Or is that in fact the sort of thing you had in mind?
I guess my question is: do you envision the concrete recommendations for what you call “de-escalation” or “avoiding acute conflict” to concern mainly “how to say it”, and to be separable from “what to say” and “whom to say it to”? It seems to me that such things mostly aren’t separable. Or am I misunderstanding?
(Certainly “not bothering those who probably have grudges” is basically sensible as a general rule, but I’ve found that it doesn’t go very far, simply because grudges don’t develop randomly and in isolation from everything else; so whatever it was that caused the grudge, is likely to prevent “don’t bother person with grudge” from being very applicable or effective.)
Also, it almost goes without saying, but: I think it is extremely unhelpful and misleading to refer to the sort of thing you describe as “enforcement”. This is not a matter of “more [or less] specific interpretation”; it’s just flatly not the same thing.
This might be a point of contention, but honestly, I don’t really understand, and do not find myself that curious about, a model of social norms that would produce the belief that only moderators can enforce norms in any way, and I am bowing out of this discussion. (The vast majority of social spaces with norms do not even have any kind of official moderator, so what does this model predict about, say, the average dinner party or college class?)
My guess is that 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space, and can maybe engage with Said on explaining this. I would appreciate someone else jumping in and explaining those models, but I don’t have the time and patience to do this.
All right, I’ll give it a try (cc @Said Achmiz).
Enforcing norms of any kind can be done either by (a) physically preventing people from breaking them—we might call this “hard enforcement”—or (b) inflicting unpleasantness on people who violate said norms, and/or making it clear that this will happen (that unpleasantness will be inflicted on violators), which we might call “soft enforcement”.[1]
Bans are hard enforcement. Downvotes are more like soft enforcement, though karma does matter for things like sorting and whether a comment is expanded by default, so there’s some element of hardness. Posting critical comments is definitely soft enforcement; posting a lot of intensely critical comments is intense soft enforcement. Now, compare with Said’s description elsewhere:
Said is clearly aware of hard enforcement and calls that “enforcement”. Meanwhile, what I call “soft enforcement”, he says isn’t anything at all like “enforcement”. One could put this down to a mere difference in terms, but I think there’s a little more.
It seems accurate to say that Said has an extremely thick skin. Probably to some extent deliberately so. This is admirable, and among other things means that he will cheerfully call out any local emperor for having no clothes; the prospect of any kind of social backlash (“soft enforcement”) seems to not bother him, perhaps not even register to him. Lots of people would do well to be more like him in this respect.
However, it seems that Said may be unaware of the degree to which he’s different from most people in this[2]. (Either in naturally having a thick skin, or in thinking “this is an ideal which everyone should be aspiring to, and therefore e.g. no one would willingly admit to being hurt by critical comments and downvotes”, or something like that.) It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I anticipate a possible objection here: “Well, if I incentivize people to think more rigorously, that seems like a good thing.” At this point the question is “Do Said’s comments enforce any norm at all?”, not “Are Said’s comments pushing people in the right direction?”. (For what it’s worth, my vague memory includes some instances of “Said is asking the right questions” and other instances of “Said is asking dumb questions”. I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.)
Second anticipated objection: “I’m not deliberately trying to enforce anything.” I think it’s possible to do this non-deliberately, even self-destructively. For example, a person could tell their friends “Please tell me if I’m ever messing up in xyz scenarios”, but then, when a friend does so, respond by interrogating the friend about what makes them qualified to judge xyz, have they ever been wrong about xyz, were they under any kind of drugs or emotional distraction or sleep deprivation at the time of observation, do they have any ulterior motives or reasons for self-deception, do their peers generally approve of their judgment, how smart are they really, what were their test scores, have they achieved anything intellectually impressive, etc. (This is avoiding the probably more common failure mode of getting offended at the criticism and expressing anger.) Like, technically, those things are kind of useful for making the report more informative, and some of them might be worth asking in context, but it is easy to imagine the friend finding it unpleasant, either because it took far more time than they expected, or because it became rather invasive and possibly touched on topics they find unpleasant; and the friend concluding “Yeesh. This interaction was not worth it; I won’t bother next time.”
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Anyway, it seems that the claim from various complainants is that Said is, deliberately or not, providing an interface of “If your posts aren’t written in a certain way, then Said is likely to ask a bunch of clarifying questions, with the result that either you may look ~unrigorous or you have to write a bunch of time-consuming replies”, and thus this constitutes soft-enforcing a norm of “writing posts in a certain way”.
Or, regarding the “clarifying questions need replies or else you look ~unrigorous” norm… Actually, technically, I would say that’s not a norm Said enforces; it’s more like a norm he invokes (that is, the norm is preexisting, and Said creates situations in which it applies). As Said says elsewhere, it’s just a fact that, if someone asks a clarifying question and you don’t have an answer, there are various possible explanations for this, one of which is “your idea is wrong”.[4] And I guess the act of asking a question implies (usually) that you believe the other person is likely to answer, so Said’s questions do promulgate this norm even if they don’t enforce it.
Moreover, this being the website that hosts Be Specific, this norm is stronger here than elsewhere. Which… I do like; I don’t want to make excuses for people being unrigorous or weak. But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category. There’s a benefit and a cost to doing so—the benefit being the resulting clarity, the cost being the time and any unpleasantness involved in answering. My brain generates the story “Said, with his extremely thick skin (and perhaps being a weird alien more generally), faces a very difficult task in relating to people who aren’t like him in that respect, and isn’t so unusually good at relating to others very unlike him that he’s able to judge the costs accurately; in practice he underestimates the costs and asks too often.”
[1] And usually anything that does (a) also does (b). Removing someone’s ability to do a thing, especially a thing they were choosing to do in the past, is likely unpleasant on first principles; plus the methods of removing capabilities are usually pretty coarse-grained. In the physical world, imprisonment is the prototypical example here.
[2] It also seems that Duncan is the polar opposite of this (or at least is in that direction), which makes it less surprising that it’d be difficult for them to come to common understanding.
[3] There was a time at work when I was running a script that caused problems for a system. I’d say that this could be called the system’s fault—one piece of the causal chain was a policy of the system’s that I’d never heard of and that seemed like the wrong policy, and another piece was the system misidentifying a certain behavior.
In any case, the guy running the system didn’t agree with the goal of my script, and I suspect resented me because of the trouble I’d caused (in that and in some other interactions). I don’t think he had the standing to say I was forbidden from running it, period; but what he did was tell me to put my script into a pull request, and then do some amount of nitpicking the fuck out of it and requesting additional features; one might call it an isolated demand for rigor, by the standards of other scripts. Anyway, this was a side project for me, and I didn’t care enough about it to push through that, so I dropped it. (Whether this was his intent, I’m not sure, but he certainly didn’t object to the result.)
[4] Incidentally, the more reasonable and respectable the questioner looks, the less plausible explanations like “you think the question is stupid or not worth your time” become, and therefore the greater the pressure to reply on someone who doesn’t want to look wrong. (One wonders if Said should wear a jester’s cap or something, or change his username to “troll”. Or maybe Said can trigger a “Name Examples Bot”, which wears a silly hat, in lieu of asking directly.)
(Separately from my longer reply: I do want to thank you for making the attempt.)
I have already commented extensively on this sort of thing. In short, if someone perceives something so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion as receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction, so severe that I cannot see how such a person could possibly expect to participate usefully in any sort of discussion forum, much less one that’s supposed to be about “advancing the art of rationality” or any such thing.
I mean, forget, for the moment, any question of “incentivizing” anyone in any way. I have no idea how it’s even possible to have discussions about anything without anyone ever asking you for clarification or explanation of anything. What does that even look like? I really struggle to imagine how anything can ever get accomplished or communicated while avoiding such things.
And the idea that “requesting more clarification and explanation” constitutes “norm enforcement” in virtue of its unpleasantness (rather than, say, being a way to exemplify praiseworthy behaviors) seems like a thoroughly bizarre view. Indeed, it’s especially bizarre on Less Wrong! Of all the forums on the internet, here, where it was written that “the first virtue is curiosity”, and that “the first and most fundamental question of rationality is ‘what do you think you know, and why do you think you know it?’”…!
There’s certainly a good deal of intellectual and mental diversity among the Less Wrong membership. (Perhaps not quite enough, I sometimes think, but a respectable amount, compared to most other places.) I count this as a good thing.
Yes. Having to file a bunch of bureaucratic forms (or else not getting the result you want). Having to answer your friend’s questions (on pain of quarrel or hurtful interpersonal conflict with someone close to you).
But nobody has to reply to comments. You can just downvote and move on with your life. (Heck, you don’t even have to read comments.)
As for the rest, well, happily, you include in your comment the rebuttal to the rest of what I might have wanted to rebut myself. I agree that I am not, in any reasonable sense of the word, “enforcing” anything. (The only part of this latter section of your comment that I take issue with is the stuff about “costs”; but that, I have already commented on, above.)
I’ll single out just one last bit:
I think you’ll find that I don’t say “name three examples” every single time someone mentions a category, either (nor—to pre-empt the obvious objection—is there any obvious non-hyperbolic version of this implied claim which is true). In fact I’m not sure I’ve ever said it. As gwern writes:
I must confess that I don’t sympathize much with those who object majorly. I feel comfortable with letting conversations on the public internet fade without explanation. “I would love to reply to everyone [or, in some cases, “I used to reply to everyone”] but that would take up more than all of my time” is something I’ve seen from plenty of people. If I were on the receiving end of the worst version of the questioning behavior from you, I suspect I’d roll my eyes, sigh, say to myself “Said is being obtuse”, and move on.
That said, I know that I am also a weird alien. So here is my attempt to describe the others:
“I do reply to every single comment” is a thing some people do, often in their early engagement on a platform, when their status is uncertain. (I did something close to that on a different forum recently, albeit more calculatedly as an “I want to reward people for engaging with my post so they’ll do more of it”.) There isn’t really a unified Internet Etiquette that everyone knows; the unspoken rules in general, and plausibly on this specifically, vary widely from place to place.
I also do feel some pressure to reply if the commenter is a friend I see in person—that it’s a little awkward if I don’t. This presumably doesn’t apply here.
I think some people have a self-image that they’re “polite”, which they don’t reevaluate especially often, and believe that it means doing certain things such as giving decent replies to everyone; and when someone creates a situation in which being “polite” means doing a lot of work, that may lead to significant unpleasantness (and possibly lead them to resent whoever put them in that situation; a popular example of this is Bilbo feeling he “has to” feed and entertain all the dwarves who come visiting, being very polite and gracious while internally finding the whole thing very worrying and annoying).
If the conversation begins well enough, that may create more of a politeness obligation in some people’s heads. The fact that someone had to create the term “tapping out” is evidence that some people’s priors were that simply dropping the conversation was impolite.
Looking at what’s been said, “frustration” is mentioned. It seems likely that, ex ante, people expect that answering your questions will lead to some reward (you’ll say “Aha, I understand, thank you”; they’ll be pleased with this result), and if instead it leads to several levels of “I don’t understand, please explain further” before they finally give up, then they may be disappointed ex post. Particularly if they’ve never had an interaction like this before, they might have not known what else to do and just kept putting in effort much longer than a more sophisticated version of them would have recommended. Then they come away from the experience thinking, “I posted, and I ended up in a long interaction with Said, and wow, that sucked. Not eager to do that again.”
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
In some places on the internet, trolling is or has been a major problem. Making someone do a bunch of work by repeatedly asking “Why?” and “How do you know that?”, and generally applying an absurdly high standard of rigor, is probably a tactic that some trolls have engaged in to mess with people. (Some of my friends who like to tease have occasionally done that.) If someone seems to be asking a bunch of obtuse questions, I may at least wonder whether it’s deliberate. And interacting with someone you suspect might be trolling you—perhaps someone you ultimately decide is pretty trollish after a long, frustrating interaction—seems potentially uncomfortable.
(I personally tend to welcome the challenge of explaining myself, because I’m proud of my own reasoning skills (and probably being good at it makes the exercise more enjoyable) and aspire to always be able to do that; but others might not. Perhaps some people have memories of being tripped up and embarrassed. Such people should get over it, but given that not all of them have done so… we shouldn’t bend over backwards for them, to be sure, but a bit of effort to accommodate them seems justifiable.)
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
I find this hard to relate to—I’m extremely secure in my own intelligence, and react to the idea of someone being possibly smarter than me with “Ooh, I hope so, I wish that were so! (But I doubt it!)”; if someone comes away thinking I’m stupid, I tend to find that amusing, at worst disappointing (disappointed in them, that is). I suspect your background resembles mine in this respect.
But I hear that teachers and even parents, frequently enough for this to be a problem, feel threatened when a kid says they’re wrong (and backs it up). (To some extent this may be due to authority-keeping issues.) I hear that often kids in school are really afraid of being called, or shown to be, stupid. John Holt (writing from his experience as a teacher—the kids are probably age 10 or so) says:
(By the way, someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high (relative to their peers in their formative years), so this would be a self-censoring fear. I don’t think I’ve seen anyone mention intellectual insecurity in connection to this whole discussion, but I’d say it likely plays at least a minor role, and plausibly plays a major role.)
Again, if school traumatizes people into having irrational fears about this, that’s not a good thing, it’s the schools’ fault, and meanwhile the people should get over it; but again, if a bunch of people nevertheless haven’t gotten over it, it is useful to know this, and it’s justifiable to put some effort into accommodating them. How much effort is up for debate.
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
Gwern is strong. You (and Zack) are also strong. Some people are weaker. One could design a forum that made zero accommodations for the weak. The idea is appealing; I expect I’d enjoy reading it and suspect I could hold my own, commenting there, and maybe write a couple of posts. I think some say that Less Wrong 1.0 was this, and too few people wanted to post there and the site died. One could argue that, even if that’s true, today there are enough people (plus enough constant influx due to interest in AI) to have a critical mass and such a site would be viable. Maybe. One could counterargue that the process of flushing out the weak is noisy and distracting, and might drive away the good people.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan, he says that most people are not “Keepers” (trained ultra-rationalists, always looking unflinchingly at harsh truths, expected to remain calm and clear-headed no matter what they’re dealing with, etc.), that most people are not fit to be Keepers, and that it’s fine and good that they don’t hold themselves to that standard. Now, like, I guess one could imagine there should be at least enough Keepers to have their own forum, and perhaps Less Wrong could be such a forum. Well, one might say that having an active forum that trains people who are not yet Keepers is a strictly easier target than, and likely a prerequisite for, an active and long-lived Keeper forum. If LW is to be the Keeper forum, where are the Keepers trained? The SSC subreddit? Just trial by fire and take the fraction of a fraction of the population who come to the forum untrained and do well without any nurturing?
I don’t know. It could be the right idea. I would give it… 25%?, that this is better than some more civilian-accommodating thing like what we have today. I am really not an expert on forecasting this, and am pretty comfortable leaving it up to the current LW team. (I also note that, if we manage to do something like enhance the overall population’s intelligence by a couple of standard deviations—which I hope will be achievable in my lifetime—then the Keeper pipeline becomes much better.) And no, I don’t think it should do much in the way of accommodating civilians at the expense of the strong—but the optimal amount of doing that is more than zero.
Much of what you write here seems to me to be accurate descriptively, and I don’t have much quarrel with it. The two most salient points in response, I think, are:
To the original question that spawned this subthread (concerning “[implicit] enforcement of norms” by non-moderators, and how such a thing could possibly work), basically everything in your comment here is non-responsive. (Which is fine, of course—it doesn’t imply anything bad about your comment—but I just wanted to call attention to this.)
However accurate your characterizations may be descriptively, the (or, at least, an) important question is whether your prescriptions are good normatively. On that point I think we do have disagreement. (Details follow.)
“Basic competence” is usually a category error, I think. (Not always, but usually.) One can have basic competence at one’s profession, or at some task or specialty; and these things could be called into question. And there is certainly a norm, in most social contexts, that a non-specialist questioning the basic competence of a specialist is a faux pas. (I do not generally object to that norm in wider society, though I think there is good reason for such a norm to be weakened, at least, in a place like Less Wrong; but probably not absent entirely, indeed.)
What this means, then, is that if I write something about, let’s say, web development, and someone asks me for clarification of some point, then the implicatures of the question depend on whether the asker is himself a web dev. If so, then I address him as a fellow specialist, and interpret his question accordingly. If not, then I address him as a non-specialist, and likewise interpret his question accordingly. In the former case, the asker has standing to potentially question my basic competence, so if I cannot make myself clear to him, that is plausibly my fault. In the latter case, he has no such standing, but likewise a request for clarification from him can’t really be interpreted as questioning my basic competence in the first place; and any question that, from a specialist, would have that implication, from a non-specialist is merely revelatory of the asker’s own ignorance.
Nevertheless I think that you’re onto something possibly important here. Namely, I have long noticed that there is an idea, a meme, in the “rationalist community”, that indeed there is such a thing as a generalized “basic competence”, which manifests itself as the ability to understand issues of importance in, and effectively perform tasks in, a wide variety of domains, without the benefit of what we would usually see as the necessary experience, training, declarative knowledge, etc., that is required to gain expertise in the domain.
It’s been my observation that people who believe in this sort of “generalized basic competence”, and who view themselves as having it, (a) usually don’t have any such thing, and (b) get quite offended when it’s called into question, even by the most indirect implication, or even conditionally. This fits the pattern you describe, in a way, but of course that is missing a key piece of the puzzle: what is unpleasant is not being asked for clarification, but being revealed to be a fraud (which would be the consequence of demonstrably failing to provide any satisfying clarification).
Definitely. (As I’ve alluded to earlier in this comment section, I am quite familiar with this problem from the administrator’s side.)
But it’s quite possible, and not even very hard, to prove oneself a non-troll. (Which I think that I, for instance, have done many times over. There aren’t many trolls who invest as much work into a community as I have. I note this not to say something like “what I’ve contributed outweighs the harm”, as some of the moderators have suggested might be a relevant consideration—and which reasoning, quite frankly, makes me uncomfortable—but rather to say “all else aside, the troll hypothesis can safely be discarded”.)
In other words, yes, trolling exists, but for the purposes of this discussion we can set that fact aside. The LW moderation team have shown themselves to be more than sufficiently adept at dealing with such “cheap” attacks that we can, to a first (or even second or third) approximation, simply discount the possibility of trolling, when talking about actual discussions that happen here.
As it happens, I quite empathize with this worry—indeed I think that I can offer a steelman of your description here, which (I hope you’ll forgive me for saying) does seem to me to be just a bit of a strawman (or at least a weakman).
There are indeed some really impressive people on Less Wrong. (Their proportion in the overall membership is of course lower than it was in the “glory days”, but nevertheless they are a non-trivial contingent.) And the worry is not, perhaps, that one will be made to look stupid in front of them, but rather that one will waste their time. “Who am I,” the potential contributor might think, “to offer my paltry thoughts on any of these lofty matters, to be listed alongside the writings of these greats, such that the important and no doubt very busy people who read this website will have to sift through the dross of my embarrassingly half-formed theses and idle ramblings, in the course of their readings here?” And then, when such a person gets up the confidence and courage to post, if the comments they get prove at once (to their minds) that all their worries were right, that what they’ve written is worthless, little more than spam—well, surely they’ll be discouraged, their fears reinforced, their shaky confidence shattered; and they won’t post again. “I have nothing to contribute,” they will think, “that is worthy of this place; I know this for a fact; see how my attempts were received!”
I’ve seen many people express worries like this. And there are, I think, a few things to say about the matter.
First: however relevant this worry may have been once, it’s hardly relevant now.
This is for two reasons, of which the first is that the new Less Wrong is designed precisely to alleviate such worries, with the “personal” / “frontpage” distinction. Well, at least, that would be true, if not for the LW moderators’ quite frustrating policy of pushing posts to the frontpage section almost indiscriminately, all but erasing the distinction, and preventing it from having the salutary effect of alleviating such worries as I have described. (At least there’s Shortform, though?)
The second reason why this sort of worry is less relevant is simply that there’s so much more garbage on Less Wrong today. How plausible is it, really, to look at the current list of frontpage posts, and think “gosh, who am I to compete for readers’ time with these great writings, by these great minds?” Far more likely is the opposite thought: “what’s the point of hurling my thoughts into this churning whirlpool of mediocrity?” Alright, so it’s not quite Reddit, but it’s bad enough that the moderators have had to institute a whole new set of moderation policies to deal with the deluge! (And well done, I say, and long overdue—in this, I wholly support their efforts.)
Second: I recall someone (possibly Oliver Habryka? I am not sure) suggesting that the people who are most worried about not measuring up tend also to be those whose contributions would be some of the most useful. This is a model which is more or less the opposite of your suggestion that “someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high”; it claims, instead, something like “someone being afraid that they won’t measure up is probably Bayesian evidence that their intellectual standards as applied to themselves are high, and that their ideas are valuable”.
I am not sure to what extent I believe either of these two models. But let us take the latter model for granted, for a moment. Under this view, any sort of harsh criticism, or even just anything but the most gentle handling and the most assiduous bending-over-backwards to avoid any suggestion of criticism, risks driving away the most potentially valuable contributors.
Of course, one problem is that any lowering of standards mostly opens the floodgates to a tide of trash, which itself then acts to discourage useful contributions. But let’s imagine that you can solve that problem—that you can set up a most discerning filter, which keeps out all the mediocre nonsense, all the useless crap, but somehow does this without spooking the easily-spooked but high-value authors.
But even taking all of that for granted—you still haven’t solved the fundamental problems.
Problem (a): even the cleverest of thinkers and writers sometimes have good ideas but sometimes have bad ideas; or ideas that have flaws; or ideas that are missing key parts; or, heck, they simply make mistakes, accidentally cite the wrong thing and come to the wrong conclusion, misremember, miscount… you can’t just not ever engage with any assumption but that the author’s ideas are without flaw, and that your part is only to respectfully learn at the author’s feet. That doesn’t work.
Problem (b): even supposing that an idea is perfect—what do you do with it? In order to make use of an idea, you must understand it, you must explore it; that means asking questions, asking for clarifications, asking for examples. That is (and this is a point which, incredibly, seems often to be totally lost in discussions like this) how people engage with ideas that excite them! (Otherwise—what? You say “wow, amazing” and that’s it? Or else—as I have personally seen, many times—you basically ignore what’s been written, and respond with some only vaguely related commentary of your own, which doesn’t engage with the post at all, isn’t any attempt to build anything out of it, but is just a sort of standalone bit of cleverness…)
No, this is just confused.
Of course I don’t have a bot that replies to every comment with “Be more specific”, but that’s not because there’s some sort of tradeoff; it’s simply that it’s not always appropriate or relevant or necessary. Why ask for clarification, if all is already clear? Why ask for examples, if they’ve already been provided, or none seem needed? Why ask for more specificity, if one’s interlocutor has already expressed themselves as specifically as the circumstances call for? If someone writes a post about “authenticity”, I may ask what they mean by the word; but what mystery, what significance, is there in the fact that I don’t do the same when someone writes a post about “apples”? I know what apples are. When people speak of “apples” it’s generally clear enough what they’re talking about. If not—then I would ask.
There is no shame in being weak. (It is an oft-held view, in matters of physical strength, that the strong should protect the weak; I endorse that view, and hold that it applies in matters of emotional and intellectual strength as well.) There may be shame in remaining weak when one can become strong, or in deliberately choosing weakness; but that may be disputed.
But there is definitely shame in using weakness as a weapon against the strong. That is contemptible.
Strength may not be required. But weakness must not be valorized. And while accommodating the weak is often good, it must never come at the expense of discouraging strength, for then the effort undermines itself, and ultimately engineers its own destruction.
I deliberately do not, and would not, cite Eliezer’s recent writings, and especially not those about dath ilan. I think that the ideas you refer to, in particular (about the Keepers, and so on), are dreadfully mistaken, to the point of being intellectually and morally corrosive.
Just for the record, your first comment was quite good at capturing some of the models that drive me and the other moderators.
This one is not, which is fine and wasn’t necessarily your goal, but I want to prevent any future misunderstandings.
I’m super not interested in putting effort into talking about this with Said. But a low-effort thing to say is: my review of Order Without Law seems relevant. (And the book itself more so, but that’s less linkable.)
I do recall reading and liking that post, though it’s been a while. I will re-read it when I have the chance.
But for now, a quick question: do you, in fact, think that the model described in that post applies here, on Less Wrong?
(If this starts to be effort I will tap out, but briefly:)
It’s been a long time since I read it too.
I don’t think there’s a specific thing I’d identify as “the model described in that post”.
There’s a hypothesis that forms an important core of the book and probably the review; but it’s not the core of the reason I pointed to it.
I do expect bits of both the book and the review apply on LW, yes.
Well, alright, fair enough.
Could you very briefly say more about what the relevance is, then? Is there some particular aspect of the linked review of which you think I should take note? (Or is it just that you think the whole review is likely to contain some relevant ideas, but you don’t necessarily have any specific parts or aspects in mind?)
Sorry. I spent a few minutes trying to write something and then decided it was going to be more effort than I wanted, so...
I do have something in mind, but I apparently can’t write it down off the cuff. I can gesture vaguely at the title of the book, but I suppose that’s unlikely to be helpful. I don’t have any specific sections in mind.
(I think I’m unlikely to reply again unless it seems exceptionally likely that doing so will be productive.)
Alright, no worries.
Dinner parties have hosts, who can do things like: ask a guest to engage or not engage in some behavior; ask a guest to leave if they’re disruptive or unwanted; not invite someone in the first place; in the extreme, call the police (having the legal standing to do so, as the owner of the dwelling where the party takes place).
College classes have instructors, who can do things like: ask a student to engage or not engage in some behavior; ask a student to leave if they’re disruptive; cause a student to be dropped from enrollment in the course; call campus security to eject the student (having the organizational and legal standing to do so, as an employee of the college, who is granted the mandate of running the lecture/course/etc.).
(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)
I, too, am capable of describing such a model.
But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.
On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.
Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Upvotes/downvotes are very slightly like “enforcement”. (But of course we’re not talking about upvotes/downvotes here.)
Banning a user from your posts is a bit more like “enforcement”. (But we’re definitely not talking about that here.)
Given the existence of moderators on Less Wrong, I do not, indeed, see any way to describe anything that I have ever done as “enforcement” of anything. It seems to me that such a claim is incoherent.
That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.
One last reply:
Indeed, college classes (and classes in general) seem like an important case study, since in my experience it is very clear that only a fraction of the norms in those classes get set by the professor/teacher, and that clearly there are many other sources of norms and of the associated enforcement of norms.
Experiencing those bottom-up norms is a shared experience, since almost everyone went through high school and college, so it seems like a good reference.
Of course this is true; it is not just the instructor, but also the college administration, etc., that function as the setter and enforcer of norms.
But it sure isn’t the students!
(And this is even more true in high school. The students have no power to set any norms, except that which is given them by the instructor/administration/etc.—and even that rarely happens.)
Have you been to an American high school and/or watched at least one movie about American high schools?
I have done both of those things, yes.
EDIT: I have also attended not one but several (EDIT 2: four, in fact) American colleges.
A central plot point of many high school movies is what is and isn’t acceptable to do, socially. For example, Regina in Mean Girls enforced a number of rules on her clique, and attempted, with significant but not complete success, to enforce them on others.
I do think it would be useful for you to say how much time should elapse without a satisfactory reply by some representative members of this 95% before we can reasonably evaluate whether this prediction has been proven true.
Oh, the central latent variable in my uncertainty here is “is anyone willing to do this?” not “is anyone capable of this?”. My honest guess is the answer to that is “no” because this kind of conversation really doesn’t seem fun, and we are 7 levels deep into a 400 comment post.
My guess is that if you actively reach out and put effort into trying to get someone to explain it to you, by e.g. putting out a bounty, or making a top-level post, or somehow sending a costly signal that you are genuinely interested in understanding, then there is a much higher chance of that, but I don’t currently expect that to happen.
You do understand, I hope, how this stance boils down to “we want you to stop doing a thing, but we won’t explain what that thing is; figure it out yourself”?
No, it boils down to “we will enforce consistent rules and spend like 100+ hours trying to explain them if an established user is confused, and if that’s not enough, then I guess that’s life and we’ll move on”.
Describing the collective effort of the Lightcone team as “unwilling to explain what the thing is” seems really quite inaccurate, given the really quite extraordinary amount of time we have spent over the years trying to get our models across. You can of course complain about the ineffectuality of our efforts to explain, but I do not think you can deny the effort, and I do not currently know what to do that doesn’t involve many additional hours of effort.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
I’ve been told (and only after much effort on my part in trying to get an answer) that the problem being solved here is something called “(implicit) enforcement of norms” on my part. I’ve yet to see any comprehensible (or even, really, seriously attempted) explanation of what that’s supposed to mean, exactly, and how any such thing can be done by a (non-moderator) user of Less Wrong. You’ve said outright that you refuse to attempt an explanation. “Unwilling to explain what the thing is” seems entirely accurate.
The ones we’ve spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and have been talking about for 5-plus years, concerning what the cost of your comments to the site has been.
It does not surprise me that you cannot summarize them or restate them in a way that shows understanding of them, which is why more effort on explaining them does not seem worth it. The concepts here are also genuinely kind of tricky, and we seem to be coming from very different perspectives and philosophies; and while I do experience frustration, I can also see why this looks very frustrating for you.
I agree that I personally haven’t put a ton of effort (though like 2-3 hours for my comments with Zack which seem related) at this specific point in time, though I have spent many dozens of hours in past years, trying to point to what seems to me the same disagreements.
But which are not, like… stated anywhere? Like, in some sort of “what are the rules of this website” page, which explains these rules?
Don’t you think that’s an odd state of affairs, to put it mildly?
The concept of “ignorance of the law is no excuse” was mentioned earlier in this discussion, and it’s a reasonable one in the real world, where you generally can be aware of what the law is, if you’re at all interested in behaving lawfully[1]. If you get a speeding ticket, and say “I didn’t know I was exceeding the speed limit, officer”, the response you’ll get is “signs are posted; if you didn’t read them, that’s no excuse”. But that’s because the signs are, in fact, posted. If there were no signs, then it would just be a case of the police pulling over whoever they wanted, and giving them speeding tickets arbitrarily, regardless of their actual speed.
You seem to be suggesting that Less Wrong has rules (not “norms”, but rules!), which are defined only in places like “long, branching, deeply nested comment threads about specific moderation decisions” and “scattered over years of discussion with some specific user(s)”, and which are conceptually “genuinely kind of tricky”; but that violating these rules is punishable, like any rules violation might be.
Does this seem to you like a remotely reasonable way to have rules?
[1] But note that this, famously, is no longer true in our society today, which does indeed have some profoundly unjust consequences.
I think we’ve tried pretty hard to communicate our target rules in this post and previous ones.
The best operationalization of them is in this comment, as well as the moderation warning I made ~5 years ago: https://www.lesswrong.com/posts/9DhneE5BRGaCS2Cja/moderation-notes-re-recent-said-duncan-threads?commentId=y6AJFQtuXBAWD3TMT
These are in a pinned moderator-top-level comment on a moderation post that was pinned for almost a full week, so I don’t think this counts as being defined in “long, branching, deeply nested comment threads about specific moderation decisions”. I think we tried pretty hard here to extract the relevant decision-boundaries and make users aware of how we plan to make decisions going forward.
We are also thinking about how to think about having site-wide moderation norms and rules that are more canonical, though I share Ruby’s hesitations about that: https://www.lesswrong.com/posts/gugkWsfayJZnicAew/should-lw-have-an-official-list-of-norms
I don’t know of a better way to have rules than this. As I said in a thread to Zack, case-law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does sure produce a body of law that in order to navigate it successfully you have to study the lines revealed through past litigation.
EDIT: Why do my comments keep double-posting? Weird.
… that comment is supposed to communicate rules?!
It says:
The only thing that looks like a rule here is “don’t imply people have an obligation to engage with [your] comments”. Is that the rule you’ve been talking about? (I asked this of Raemon and his answer was basically “yes but not only”, or something like that.)
And the rest pretty clearly suggests that there isn’t a clearly defined rule here.
The mod note from 5 years ago seems to me to be very clearly not defining any rules.
Here’s a question: if you asked ten randomly selected Less Wrong members: “What are the rules of Less Wrong?”—how many of them would give the correct answer? Not as a link to this or that comment, but in their own words (or even just by quoting a list of rules, minus the commentary)?
(What is the correct answer?)
How many of their answers would even match one another?
Yes, of course, but the way this works in real-world legal systems is that first there’s a law, and then there’s case law which establishes precedent for its application. (And, as you say, it hardly makes it easy to comply with the law. Perhaps I should retain an attorney to help me figure out what the rules of Less Wrong are? Do I need to have a compliance department…?) Real-world legal systems in well-functioning modern countries generally don’t take the approach of “we don’t have any written down laws; we’ll legislate by judgment calls in each case; even after doing that, we won’t encode those judgments into law; there will only be precedent and judicial opinion, and that will be the whole of the law”.[1]
Have there been societies in the past which have worked like this? I don’t know. Maybe we can ask David Friedman?
Do I understand you correctly as saying that the problem, specifically, is… that people reading my comments might, or do, get a mistaken impression that there exists on Less Wrong some sort of social norm which holds that authors have a social obligation to respond to comments on their posts?
That aside, I have questions about this rate limit:
Does it apply to all posts of any kind, written by anyone? More specifically:
Does it apply to both personal and frontpage posts?
Does it apply to posts written by moderators? Posts written about me (or specifically addressing me)? Posts written by moderators about me?
Does it apply to this post? (I assume that it must not, since you mention that you’d like me to make a case that so-and-so, you say “I am interested in what Said actually prefers here”, etc., but just want to confirm this) EDIT: See below
Does it apply to “open thread” type posts (where the post itself is just a “container”, so to speak, and entirely different conversations may be happening under different top-level comments)?
Does it apply to my own posts? (That would be very strange, of course, but it wouldn’t be the strangest edge case that’s ever been left unhandled in a feature implementation, so seems worth checking…)
Does it apply retroactively to existing posts (including very old posts), or only new posts going forward?
Is there any way for a post author to disable this rate limit, or opt out of it?
Does the rate limit reset at a specific time each week, or is there simply a check for whether 3 posts have been written in the period starting one week before current time?
Is there any rate limit on editing comments, or only posting new ones? (It is presumably not the intent to have the rate limit triggered by fixing a typo, for instance…)
Is there a way for me to see the status of the rate limit prior to posting, or do I only find out whether the limit’s active when I try to post a comment and get an error?
Is there any UI cue to inform readers or other commenters (including a post’s author) that I can’t reply to a comment of theirs, e.g., due to the rate limit?
ETA: After attempting to post this comment last night, I received a message informing me that I would not be able to do so until some hours in the then-future. This answers the crossed-out question above, I suppose. Unfortunately, it also makes the asides about wanting to know what I think on this topic… well, somewhat farcical, quite frankly.
Aww christ I am very sorry about this. I had planned to ship the “posts can be manually overridden to ignore rate limiting” feature first thing this morning and apply it to this post, but I forgot that you’d still have made some comments less than a week ago which would block you for a while. I agree that was a really terrible experience and I should have noticed it.
The feature is getting deployed now and will probably be live within a half hour.
For now, I’m manually applying the “ignore rate limit” flag to posts that seem relevant. (I’ll likely do a migration backfill on all posts by admins that are tagged “Site Meta”. I haven’t made a call yet about Open Threads)
I think some of your questions are answered in the previous comment:
I’ll write a more thorough response after we’ve finished deploying the “ignoreRateLimits flag for posts” PR.
“Site Meta” posts cover a lot more than moderation, so I’m not sure we should do that.
Basically yes, although I note I said a lot of other words here that were all fairly important, including the links back to previous comments. For example, it’s important that I think you are factually incorrect about there being “normatively correct general principles” that people who don’t engage with your comments “should be interpreted as ignorant”.
(While I recall you explicitly disclaiming such an obligation in some other recent comments… if you don’t think there is some kind of social norm about this, why did you previously use phrasing like “there is always such an obligation” and “Then they shouldn’t post on a discussion forum, should they? What is the point of posting here, if you’re not going to engage with commenters?”. Even if you think most of your comments don’t have the described effect, I think the linked comment straightforwardly implies a social norm. And I think the attitude in that comment shines through in many of your other comments)
I think my actual crux is “somehow, at the end of the day, people feel comfortable ignoring and/or downvoting your comments if they don’t think they’ll be productive to engage with.”
I believe “Said’s commenting style actively pushes against this in a norm-enforcing-feeling way”, but, as noted in the post, I’m still kind of confused about that (and I’ll say explicitly here: I am still not sure I’ve named the exact problem). I said a whole lot of words about various problems and caveats and how they fit together and I don’t think you can simplify it down to “the problem is X”. I said at the end that a major crux is “Said can adhere to the spirit of ‘don’t imply people have an obligation to engage with your comments’”, where “spirit” is doing some important work of indicating the problem is fuzzy.
We’ve given you a ton of feedback about this over 5-6 years. I’m happy to talk or answer questions for a couple more days if the questions look like they’re aimed at ‘actually figure out how to comply with the spirit of the request’, but not more discussion of ‘is there a problem here from the moderator’s perspective?’.
I understand (and respect) that you think the moderators are wrong in several deep ways here, and I do honestly think it’s good/better for you to stick around with a generator of thoughts and criticism that’s somewhat uncorrelated with the site admin judgment (but not free rein to rehash it in subtle conflict in other people’s comment sections).
I’m open (in the longterm) to arguments about whether our entire moderation policy is flawed, but that’s outside the scope of this moderation decision, and you should argue about that in top-level posts and/or in posts by Zack/etc if it’s important to you.

[Random note that is probably implied but I want to make explicit: “enforcing standards that the LW community hasn’t collectively opted into in other people’s threads” is also essentially the criticism I’d make of many past comments of Duncan’s, although he goes about it in a pretty different way]
Well, no doubt most or all of what you wrote was important, but by “important” do you specifically mean “forms part of the description of what you take to be ‘the problem’, which this moderation action is attempting to solve”?
For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!
Or, take the links. One of them is clearly meant to be an example of the thing you described (and which I quoted). The others… don’t seem to be.[2] Are they just examples of things where you disagree with me? Again, fine and well, but is “being (allegedly) wrong about some non-obvious philosophical point” a moderation-worthy offense…? How do these other links fit into a description of what problem you’re solving?
And, perhaps just as importantly… how does any of this fit into… well, anything that has happened recently? All of your links are to discussions that took place three years ago. What is the connection of any of that to recent events? Are you suggesting that I have recently written comments that would give people the impression that Less Wrong has a social norm that imputes on post authors an obligation to respond to comments on their posts?
I ask these things not because I want to persuade you that there isn’t a problem, per se (I think there are many problems but of course my opinion differs from yours about what they are)—but, rather, because I can hardly comply with the rules, either in letter or in spirit or in any other way, when I don’t know what the rules are. From my perspective, what I seem to see the mods doing is the equivalent of the police stopping a person who’s walking down the street, saying “we’re taking you in for speeding”, and, in response to the confused citizen’s protests, explaining that he got a speeding ticket three years ago, and now they’re arresting him for exceeding the speed limit. Is this a long-delayed punishment? Is there a more recent offense? Is there some other reason for the arrest? Or what?
I think that people should feel comfortable ignoring and/or downvoting anyone’s comments if they don’t think engagement will be productive! Certainly I should not be any sort of exception to this. (Why in the world would I be? Of course you should engage only if you have some expectation that engaging will be productive, and not otherwise!)
If I write a comment and you think it is a bad comment (useless, obviously wrong, etc.), by all means downvote and ignore. Why not? And if I write another comment that says “you have an obligation to reply!”—I wouldn’t say that, because I don’t think that, but let’s say that I did—downvote and ignore that comment, too! Do this no matter who the commenter is!
Anyhow, if the problem really is essentially as I’ve summarized it, plus or minus some nuances and elaborations, then:
I really don’t see what any recent events have to do with anything, or how the rate limit solves it, or… really, this entire situation perplexes me, from that perspective. But,
If the worry is that other Less Wrong participants might get the wrong idea about site norms from my comments, then let me assure you that my comments certainly shouldn’t be taken to imply that said norms are anything other than what the moderators say they are. If anyone gets any other impression from my comments, that can only be a misunderstanding. I solemnly promise that if anyone questions me on this point (i.e., asks whether I am claiming the existence of some norms which the moderators have disclaimed), I will, in response, clearly reaffirm this view. (I encourage anyone, moderators or otherwise, to link to this comment in answer to any commenters or authors who seem at all confused on this point.)
Is that… I mean, does that solve the problem…?
Actually, you somewhat misconstrue the comment, by taking it out of context. That’s perhaps not too important, but worth noting. In any case, it’s a comment I wrote three years ago, in the middle of a long discussion, and as part of a longer and offhandedly-written description, spread over a number of comments, of my view—and which, moreover, takes its phrasing directly from the comment it was a reply to. These are hardly ideal conditions for expressing nuances of meaning. My view is that, when writing comments like this in the middle of a long discussion, it is neither necessary nor desirable to agonize over whether the phrasing and formulation is ideal, because anyone who disagrees or misunderstands can just reply to indicate that, and the confusion or disagreement can be hammered out in the replies. (And this is largely what happened in the given case.[3])
In particular, I can’t help but note that you link to a sub-thread which begins with me saying “This comment is a tangent, and I haven’t decided yet if it’s relevant to my main points or just incidental—”, i.e., where I pretty clearly signal that engagement isn’t necessarily critical, as far as the main discussion goes.
Perhaps you missed it, but I did write a comment in that discussion where I very explicitly wrote that “I’m not saying that there’s a specific obligation for a post author to post a reply comment, using the Less Wrong forum software, directly to any given comment along the lines I describe”. Was that comment, despite my efforts, somehow unclear? That’s possible! These things happen. But is that a moderation-worthy offense…?
The philosophical disagreement is related-to but not itself the thing I believe Ray is saying is bad. The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do, and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts. Enforcing norms on behalf of a space that you don’t have buy-in for and that the space would reject tricks people into wasting their time and energy trying to be good citizens of the space in a way that isn’t helping and isn’t being asked of them.
If you did so, I think that behavior ought to be clearly punished in some way. I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm, and also regardless of whether you were fully aware that you were doing so. I think it’s often correct to issue a blanket punishment of a costly behavior even on the occasions that it is done unknowingly, to ensure that there is a consistent incentive against the behavior — similar to how it is typically illegal to commit a crime even if you aren’t aware what you did was a crime.
Is that really the claim? I must object to it, if that’s so. I don’t think I’ve ever made any false claims about what social norms obtain on Less Wrong (and to the extent that some of my comments were interpreted that way, I was quick to clearly correct that misinterpretation).
Certainly the “normatively correct general principles” comment didn’t contain any such false claims. (And Raemon does not seem to be claiming otherwise.) So, the question remains: what exactly is the relevance of the philosophical disagreement? How is it connected to any purported violations of site rules or norms or anything?
I am not sure what this means. I am not a moderator, so it’s not clear to me how I can enforce any norm. (I can exemplify conformance to a norm, of course, but that, in this case, would be me replying to comments on my posts, which is not what we’re talking about here. And I can encourage or even demand conformance to some falsely-claimed norm. But for me to enforce anything seems impossible as a purely technical matter.)
Indeed, if I had done this, then some censure would be warranted. (Now, personally, I would expect that such censure would start with a comment from a moderator, saying something like: “<name of my interlocutor>, to be clear, Said is wrong about what the site’s rules and norms are; there is no obligation to respond to commenters. Said, please refrain from misleading other users about this.” Then subsequent occurrences of comments which were similarly misleading might receive some more substantive punishment, etc. That’s just my own, though I think a fairly reasonable, view of how this sort of moderation challenge should be approached.)
But I think that, taking the totality of my comments in the linked thread, it is difficult to support the claim that I somehow made false claims about site rules or norms. It seems to me that I was fairly clearly talking about general principles—about epistemology, not community organization.
Now, perhaps you think that I did not, in fact, make my meaning clear enough? Well, as I’ve said, these things do happen. Certainly it seems to me like step one to rectify the problem, such as it is, would be just to make a clear ex cathedra statement about what the rules and norms actually are. That mitigates any supposed damage. (Was this done? I don’t recall that it was. But perhaps I missed it.) Then there can be talk of punishment.[1]
But, of course, there already was a moderation warning issued for the incident in question. Which brings us back to the question of what it has to do with the current situation (and to my “arrest for a speeding ticket issued three years ago” analogy).
P.S.:
To be maximally clear: I neither believed nor (as far as I can recall) claimed this.
Although it seems to me that to speak in terms of “punishment”, when the offense (even taking as given that the offense took place at all) is something so essentially innocent as accidentally mis-characterizing an informal community norm, is, quite frankly, bizarrely harsh. I don’t think that I’ve ever participated in any other forum with such a stringent approach to moderation.
For a quick answer connecting the dots between “What does the recent Duncan/Said conflict have to do with Said’s past behavior”: I think your behavior in the various you/Duncan threads was bad in basically the same way as the behavior we gave you a mod warning about 5 years ago, and also similar to a preliminary warning we gave you 6 years ago (in intercom, which ended in us deciding to take no action at the time).
(i.e. some flavor of aggressiveness/insultingness, along with demanding more work from others than you were bringing yourself).
As I said, I cut you some slack for it because of some patterns Duncan brought to the table, but not that much slack.
The previous mod warning said “we’d ban you for a month if you did it again.” I don’t really feel great about that, since over the past 5 years there have been various comments that flirted with the same behavior, and the cost of evaluating it each time is pretty high.
I will think on whether this changes anything for me. I do think it’s helpful; offhand I don’t feel that it completely (or obviously more than 50%) solves the problem, but I do appreciate it and will think on it.
I wonder if you find this comment by Benquo (i.e., the author of the post in question; note that this comment was written just months after that post) relevant, in any way, to your views on the matter?
Yeah I do find that comment/concept important. I think I was basically already counting that class of thing in the list of positive things I’d mentioned elsethread, but yes, I am grateful to you for that. (Benquo being one to say it in that context is a bit more evidence of its weight which I had missed before, but I do think I was already weighting the concept approximately the right amount for the right reasons. Partly from having already generally updated on some parts of the Benquo worldview.)
Please note, my point in linking that comment wasn’t to suggest that the things Benquo wrote are necessarily true and that the purported truth of those assertions, in itself, bears on the current situation. (Certainly I do agree with what he wrote—but then, I would, wouldn’t I?)
Rather, I was making a meta-level point. Namely: your thesis is that there is some behavior on my part which is bad, and that what makes it bad is that it makes post authors feel… bad in some way (“attacked”? “annoyed”? “discouraged”? I couldn’t say what the right adjective is, here), and that as a consequence, they stop posting on Less Wrong. And as the primary example of this purported bad behavior, you linked the discussion in the comments of the “Zetetic Explanation” post by Benquo (which resulted in the mod warning you noted).
But the comment which I linked has Benquo writing, mere months afterward, that the sort of critique/objection/commentary which I write (including the sort which I wrote in response to his aforesaid post) is “helpful and important”, “very important to the success of an epistemic community”, etc. (Which, I must note, is tremendously to Benquo’s credit. I have the greatest respect for anyone who can view, and treat, their sometime critics in such a fair-minded way.)
This seems like very much the opposite of leaving Less Wrong as a result of my commenting style.
It seems to me that when the prime example you provide of my participation in discussions on Less Wrong purportedly being the sort of thing that drives authors away actually turns out to be an example of exactly the opposite—of an author (whose post I criticized, in somewhat harsh terms) fairly soon (months) thereafter saying that my critical comments are good and important to the community and that I should continue…
… well, then regardless of whether you agree with the author in question about whether or not my comments are good/important/whatever, the fact that he holds this view casts very serious doubt on your thesis. Wouldn’t you agree?
(And this, note, is an author who has written many posts, many of them quite highly upvoted, and whose writings I have often seen cited in all sorts of significant discussions, i.e., one who has contributed substantially to Less Wrong.)
The reason it’s not additional evidence to me is that I, too, find value in the comments you write for the reasons Benquo states, despite also finding them annoying at the time. So, Benquo’s response seems like an additional instance of my viewpoint, rather than a counterexample. (Though I’m not claiming Benquo agrees with me on everything in this domain.)
Said is asking Ray, not me, but I strongly disagree.
Point 1 is that a black raven is not strong evidence against white ravens. (Said knows this, I think.)
Point 2 is that a behavior which displeases many authors can still be pleasant or valuable to some authors. (Said knows this, I think.)
Point 3 is that benquo’s view on even that specific comment is not the only author-view that matters; benquo eventually being like “this critical feedback was great” does not mean that other authors watching the interaction at the time did not feel “ugh, I sure don’t want to write a post and have to deal with comments like this one.” (Said knows this, I think.)
(Notably, benquo once publicly stated that he suspected a rough interaction would likely have gone much better under Duncan moderation norms specifically; if we’re updating on benquo’s endorsements then it comes out to “both sets of norms useful,” presumably for different things.)
I’d say it casts mild doubt on the thesis, at best, and that the most likely resolution is that Ray ends up feeling something like “yeah, fair, this did not turn out to be the best example,” not “oh snap, you’re right, turns out it was all a house of cards.”
(This will be my only comment in this chain, so as to avoid repeating past cycles.)
A black raven is, indeed, not strong evidence against white ravens. But that’s not quite the right analogy. The more accurate analogy would go somewhat like this:
Alice: White ravens exist!
Bob: Yeah? For real? Where, can I see?
Alice (looking around and then pointing): Right… there! That one!
Bob (peering at the bird in question): But… that raven is actually black? Like, it’s definitely black and not white at all.
Now not only is Bob (once again, as he was at the start) in the position of having exactly zero examples of white ravens (Alice’s one purported example having been revealed to be not an example at all), but—and perhaps even more importantly!—Bob has reason to doubt not only Alice’s possession of any examples of her claim (of white ravens existing), but her very ability to correctly perceive what color any given raven is.
Now if Alice says “Well, I’ve seen a lot of white ravens, though”, Bob might quite reasonably reply: “Have you, though? Really? Because you just said that that raven was white, and it is definitely, totally black.” What’s more, not only Bob but also Alice herself ought rightly to significantly downgrade her confidence in her belief in white ravens (by a degree commensurate with how big a role her own supposed observations of white ravens have played in forming that belief).
Just so. But, once again, we must make our analysis more specific and more precise in order for it to be useful. There are two points to make in response to this.
First is what I said above: the point is not just that the commenting style/approach in question is valuable to some authors (although even that, by itself, is surely important!), but that it turns out to be valuable specifically to the author who served as an—indeed, as the—example of said commenting style/approach being bad. This calls into question not just the thesis that said approach is bad in general, but also the weight of any purported evidence of the approach’s badness, which comes from the same source as the now-controverted claim that it was bad for that specific author.
Second is that not all authors are equal.
Suppose, for example, that dozens of well-respected and highly valued authors all turned out to condemn my commenting style and my contributions, while those who showed up to defend me were all cranks, trolls, and troublemakers. It would still be true, then, to say that “my comments are valuable to some authors but displease others”, but of course the views of the “some” would be, in any reasonable weighting, vastly and overwhelmingly outweighed by the views of the “others”.
But that, of course, is clearly not what’s happening. And the fact that Benquo is certainly not some crank or troll or troublemaker, but a justly respected and valued contributor, is therefore quite relevant.
First, for clarity, let me note that we are not talking (and Benquo was not talking) about a single specific comment, but many comments—indeed, an entire approach to commenting and forum participation. But that is a detail.
It’s true that Benquo’s own views on the matter aren’t the only relevant ones. But they surely are the most relevant. (Indeed, it’s hard to see how one could claim otherwise.)
And as far as “audience reactions” (so to speak) go, it seems to me that what’s good for the goose is good for the gander. Indeed, some authors (or potential authors) reading the interaction might have had the reaction you describe. But others could have had the opposite reaction. (And, judging by the comments in that discussion thread—as well as many other comments over the years—others in fact did have the opposite reaction, when reading that discussion and numerous others in which I’ve taken part.) What’s more, it is even possible (and, I think, not at all implausible) that some authors read Benquo’s months-later comment and thought “you know, he’s right”.
Well, as I said in the grandparent comment, updating on Benquo’s endorsement is exactly what I was not suggesting that we do. (Not that I am suggesting the opposite—not updating on his endorsement—either. I am only saying that this was not my intended meaning.)
Still, I don’t think that what you say about “both sets of norms useful” is implausible. (I do not, after all, take exception to all of your preferred norms—quite the contrary! Most of them are good. And an argument can be made that even the ones to which I object have their place. Such an argument would have to actually be made, and convincingly, for me to believe it—but that it could be made, seems to me not to be entirely out of the question.)
Well, as I’ve written, to the extent that the convincingness of an argument for some claim rests on examples (especially if it’s just one example), the purported example(s) turning out to be no such thing does, indeed, undermine the whole argument. (Especially—as I note above—insofar as that outcome also casts doubt on whatever process resulted in us believing that raven to have been white in the first place.)
Answering some other questions:
By default, the rate limit applies to all posts. There are two exceptions (sketched in code below):
1. I just shipped the “ignore rate limits” flag on posts, which authors or admins can set so that a given post allows rate-limited users to comment without restriction.
2. I haven’t shipped it yet, but expect within the next day to ship “rate-limited authors can comment on their own posts without restriction.” (For the immediate future this just applies to authors; I expect to ship something that makes it work for coauthors.)
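To make the two exceptions concrete, here is a minimal sketch of how they might short-circuit the check. The field names (`ignoreRateLimits`, `authorId`) are assumptions for illustration, not the actual LessWrong schema.

```typescript
// Minimal sketch (assumed names, not the actual LessWrong implementation)
// of how the two exceptions above could short-circuit the rate-limit check.
interface Post {
  ignoreRateLimits: boolean; // exception 1: per-post flag set by author/admin
  authorId: string;
}

function isExemptFromRateLimit(post: Post, commenterId: string): boolean {
  if (post.ignoreRateLimits) return true;         // flag set on this post
  if (post.authorId === commenterId) return true; // commenting on one's own post
  return false;
}
```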
In general, we are starting by rolling out the simplest versions of the rate-limiting feature (which is being used on many users, not just you), and solving problems as we notice them. I acknowledge this makes for some bad experiences along the way. I think I stand by that decision because I’m not even sure rate limits will turn out to work as a moderator tool, and investing like 3 months of upfront work ironing out the bugs first doesn’t seem like the right call.
For the general question of “whether a given such-and-such post will be rate limited”, the answer will route through “will individual authors choose to set ‘ignoreRateLimit’, and/or will site admins choose to do it?”.
Ruby and I have some disagreements on how important it is to set the flag on moderation posts. I personally think it makes sense to be extra cautious about limiting people’s ability to speak in discussions that will impact their future ability to speak, since those can snowball and I think people are rightly wary of that. There are some other tradeoffs important to @Ruby, which I guess he can elaborate on if he wants.
For now, I’m toggling on the ignoreRateLimits flag on most of my own moderation posts (I’ve currently done so for “LW Team is adjusting moderation policy” and “‘Rate limiting’ as a mod tool”).
Other random questions:
Re: Open threads – I haven’t made a call yet, but I’m leaving the flag disabled/rate-limited-normally for now.
There is no limit on rate-limited people editing their own comments. We might revisit it if it’s a problem, but my current guess is rate-limitees editing their comments is pretty fine.
The check happens based on the timestamps of your comments (it works via fetching your comments within the time window and seeing if there are more than the allotted amount; see the sketch at the end of this comment).
On LessWrong.com (but presumably not greaterwrong, atm) it should inform you that you’re not able to comment before you get started.
On LessWrong.com, it will probably (later, but not yet; not sure whether we’ll get to it this week) show an indicator that a commenter has been rate limited. (It’s fairly easy to do this when you open a comment-box to reply to them; there are some performance concerns for checking-to-display it on every comment.)
I plan to add a list of rate-limited users to lesswrong.com/moderation. I think there’s a decent chance that goes live within a day or so.
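As a rough illustration of the timestamp-based check described above (assumed names only; `fetchComments` is a hypothetical helper, not a real LessWrong API):

```typescript
// Rough sketch of the sliding-window check: count the user's comments on a
// post in the trailing week and compare against the allowance.
declare function fetchComments(args: {
  userId: string;
  postId: string;
  postedAfter: Date;
}): Promise<unknown[]>;

const COMMENTS_PER_WEEK = 3;
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

async function isRateLimited(userId: string, postId: string): Promise<boolean> {
  const windowStart = new Date(Date.now() - WEEK_MS);
  const recent = await fetchComments({ userId, postId, postedAfter: windowStart });
  return recent.length >= COMMENTS_PER_WEEK;
}
```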
A lot of this is that the set of “all moderation posts” covers a wide range of topics and the potential set of “all rate limited users” might include a wide diversity of users, making me reluctant to commit upfront to rate limits not applying blanketly across the board on moderation posts.
The concern about excluding people from conversations that affect whether they get to speak is a valid consideration, but I think there are others too. Chiefly, people are likely rate limited primarily because they get in the way of productive conversation, and in so far as I care about moderation conversations going well, I might want to continue to exclude rate limited users there.
Note that there are ways, albeit with friction, for people to get to weigh in on moderation questions freely. If it seemed necessary, I’d be down with creating special un-rate-limited side-posts for moderation posts.
I am realizing that what seems reasonable here will depend on your conception of rate limits. A couple of conceptions you might have:
1. You’re currently not producing stuff that meets the bar for LessWrong, but you’re writing a lot, so we’ll rate limit you as a warning with teeth to up your quality.
2. We would have banned / are close to banning you; however, we think rate limits might serve either as
a sufficient disincentive against the actions we dislike
a restriction that simply stops you getting into unproductive things, e.g. Demon Threads
Regarding 2., a banned user wouldn’t get to participate in moderation discussions either, so under that frame, it’s not clear rate limited users should get to. I guess it really depends if it was more of a warning / light rate ban or something more severe, close to an actual ban.
I can say more here, not exactly a complete thought. Will do so if people are interested.
I just shipped the “ignore rate limit” flag for posts, and removed the rate limit for this post. All users can set the flag on individual posts.
Currently they have to set it for each individual post. I think it’s moderately likely we’ll make it such that users can set it as a default setting, although I haven’t talked it through with other team members yet so can’t make an entirely confident statement on it. We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent)
I’m working on a longer response to the other questions.
I could be misunderstanding all sorts of things about this feature that you’ve just implemented, but…
Why would you want to limit newer users from being able to declare that rate-limited users should be able to post as much as they like on newer users’ posts? Shouldn’t I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?
100+ karma means something like you’ve been vetted for some degree of investment in the site and enculturation, reducing the likelihood you’ll do something with poor judgment and ill intention. I might worry about new users creating posts that ignore rate limits, then attracting all the rate-limited new users who were not having good effects on the site to come comment there (haven’t thought about it hard, but it’s the kind of thing we consider).
The important thing is that the way the site currently works, any behavior on the site is likely to affect other parts of the site, such that to ensure the site is a well-kept garden, the site admins do have to consider which users should get which privileges.
(There are similarly restrictions on which users can ban other users from their posts.)
I expect Ray will respond more. My guess is you not being able to comment on this specific post is unintentional and it does indeed seem good to have a place where you can write more of a response to the moderation stuff.
The other details will likely be figured out as the feature gets used. My guess is how things behave are kind of random until we spend more time figuring out the details. My sense was that the feature was kind of thrown together and is now being iterated on more.
The discussion under this post is an excellent example of the way that a 3-per-week per-post comment limit makes any kind of useful discussion effectively impossible.
I continue to be disgusted with this arbitrary moderator harassment of a long-time, well-regarded user, apparently on the pretext that some people don’t like his writing style.
Achmiz is not a spammer or a troll, and has made many highly-upvoted contributions. If someone doesn’t like Achmiz’s comments, they’re free to downvote (just as I am free to upvote). If someone doesn’t want to receive comments from Achmiz, they’re free to use already-existing site functionality to block him from commenting on their own posts. If someone doesn’t like his three-year-old views about an author’s responsibility or lack thereof to reply to criticisms, they’re free to downvote or offer counterarguments. Why isn’t that the end of the matter?
Elsewhere, Raymond Arnold complains that Achmiz isn’t “corrigible about actually integrating the spirit-of-our-models into his commenting style”. Arnold also proposes that awareness of frame control—a concept that Achmiz has criticized—become something one is “obligated to learn, as a good LW citizen”. I find this attitude shockingly anti-intellectual. Since when is it the job of a website administrator to micromanage how intellectuals think and write, and what concepts they need to accept? (As contrasted to removing low-quality, spam, or off-topic comments; breaking up flame wars, &c.)
My first comment on Overcoming Bias was on 15 December 2007. I was at the first Overcoming Bias meetup on 21 February 2008. Back then, there was no concept of being a “good citizen” of Overcoming Bias. It was a blog. People read the blog, and left comments when they had something to say, speaking in their own voice, accountable to no authority but their own perception of reality, with no obligation to be corrigible to the spirit of someone else’s models. Achmiz’s first comment on Less Wrong was in May 2010.
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
Perhaps it will be replied that no one is being silenced—this is just a mere rate-limit, not any kind of persecution or restriction on speech. I don’t think Oliver Habryka is naïve enough to believe that. Citizenship—first-class citizenship—is a Schelling point. When someone tries to take that away from you, it would be foolish to believe that they don’t intend you any further harm.
I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here.
Sure, but… I think I don’t know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more.
Some quick thoughts:
LessWrong totally has prerequisites. I don’t think you necessarily need to be an atheist to participate in LessWrong, but if you straightforwardly believe in the Christian god, and haven’t really engaged with the relevant arguments on the site, and you comment on posts that assume that there is no god, I will likely just ban you or ask you to stop. There are many other dimensions for which this is also true. Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
Well-Kept Gardens Die by Pacifism is IMO one of the central moderation principles of LessWrong. I have huge warning flags around your language here and feel like it’s doing something pretty similar to the outraged calls for “censorship” that Eliezer refers to in that post, but I might just be misunderstanding you. In-general, LessWrong has always and will continue to be driven by inside-view models of the moderators about what makes a good discussion forum, and this seems quite important.
I don’t know, I guess your whole comment feels really quite centrally like the kind of thing that Eliezer explicitly warns against in Well-Kept Gardens Die by Pacifism, so let me just reply to quotes from you with quotes from Eliezer:
Eliezer:
You:
Eliezer:
Again, this is all just on a very rough reading of your comment, and I might be misunderstanding you.
My current read here is that your objection is really a very standard “how dare the moderators moderate LessWrong” objection, when like, I do really think we have the mandate to moderate LessWrong how we see fit, and indeed maybe the primary reason why LessWrong is not as dead as basically every other forum of its age and popularity is because it had the seed of “Well-Kept Gardens Die by Pacifism” in it. The understanding that yes, of course the moderators will follow their inside view and make guesses at what is best for the site without trying to be maximally justifiable, and without getting caught in spirals of self-doubt of whether they have the mandate to do X or Y or Z.
But again, I don’t think I super understood what specific question you were asking me, so I might have totally talked past you.
I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and treatment of its citizens. The quote is clearly noncentral for this case.
Thanks, to clarify: I don’t intend to make a “how dare the moderators moderate Less Wrong” objection. Rather, the objection is, “How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma.” (That’s why the grandparent specifies “long-time, well-regarded”, “many highly-upvoted contributions”, “We were here first”, &c.) I’m saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don’t want to accept literally any speech (which is why the grandparent mentions “removing low-quality [...] comments” as a legitimate moderator duty).
Note that “permanently restrict the account of” is different from “moderate”. For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic, and Achmiz complied. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I’m accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz.
Regarding Yudkowsky’s essay “Well-Kept Gardens Die By Pacifism”, please note that the end of the essay points out that a forum with a karma system is different from a forum (such as a mailing list) in which moderators are the only attention-allocation mechanism, and urges users not to excessively question themselves when considering downvoting. I agree with this! That’s why the grandparent emphasizes that users who don’t like Achmiz’s comments are free to downvote them. The grandparent also points out that users who don’t want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don’t see what actual problem exists that’s not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.
I fear that Yudkowsky might have been right when he claimed that “[a]ny community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.” I sincerely hope Less Wrong is worth saving.
Hmm, I am still not fully sure about the question (your original comment said “I think Oli Habryka has the integrity to give me a staight, no-bullshit answer here”, which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said’s net-contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.
One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like “purchase him out of his right to use LessWrong” or something like that, by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts, to make it more clear that I am hoping for some kind of trade here, and don’t want this to feel like some kind of social slapdown.
Now, commenting on the individual pieces:
Well, I mean, the disagreement surely is about whether Said, in his capacity as a commenter, is “well-regarded”. My sense is Said is quite polarizing and saying that he is a “long-time ill-regarded” user would be just as accurate. Similarly saying “many highly-downvoted contributions” is also accurate. (I think seniority matters a bit, though like not beyond a few years, and at least I don’t currently attach any special significance to someone having been around for 5 years vs. 10 years, though I can imagine this being a mistake).
This is not to say I would consider a summary that describes Said as a “long-time ill-regarded menace with many highly downvoted contributions” as accurate. But neither do I think your summary here is accurate. My sense is a long-time user with some highly upvoted comments and some highly downvoted comments can easily be net-negative for the site.
Neither do I feel that net-karma is currently at all a good guide to the quality of site-contributions. First, karma is just very noisy, and sometimes random posts and comments get hundreds of karma when someone on Twitter links to them and the tweet goes viral. But second, and more importantly, there is a huge bias in karma towards positive karma. You frequently find comments with +70 karma and very rarely see comments with −70 karma. Some of that is a natural consequence of making comments and posts with higher karma more visible; some of that is that most people experience pushing someone into the negatives as a lot socially harsher than letting them hover somewhere around 0.
This is again not to say that I am actually confident that Said’s commenting contributions have been net-negative for the site. My current best guess is yes, but it’s not super obvious to me. I am however quite confident that there is a specific type of commenting interaction that has been quite negative, has driven away a lot of really valuable contributors, and doesn’t seem to have produced much value, which is the specific type of interaction that Ray is somehow trying to address with the rate-limiting rules.
I think people responded pretty extensively to the comment you mention here, but to give my personal response to this:
Most people (and especially new users) don’t keep track of individual commenters to the degree that would make it feasible to ban the people they would predictably have bad interactions with. The current proposal is basically to allow users to ban or unban Said however they like (since they can both fully ban him, and allow Said to comment without rate limit on their post), we are just suggesting a default that I expect to be best for most users and the default site experience.
Downvoting helps a bit with reducing visibility, but it doesn’t help a lot. I see downvoting in substantial part as a signal from the userbase to the authors and moderators to take some kind of long-term action. When someone’s comments are downvoted, authors still get notifications for them, and they still tend to blow up into large demon threads, and so just voting on comments doesn’t help that much with solving the moderation problem (this is less true for posts, but only a small fraction of Said’s contributions are in the form of posts, and I actually really like all of his posts, so this doesn’t really apply here). We can try to make automated systems here, but I can’t currently think of any super clear-cut rules we could put into code, since as I said above, net-karma really is not a reliable guide. I do think it’s worth thinking more about (using the average of the most recent N comments helps a bit, but is really far from catching all the cases I am concerned about).
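For concreteness, here is a toy sketch of the “average karma of the most recent N comments” heuristic mentioned above; all names are hypothetical and this is not LessWrong’s actual code.

```typescript
// Toy sketch of the heuristic: average the karma of a user's most recent
// N comments and flag them if the average falls below a threshold.
declare function fetchRecentComments(
  userId: string,
  limit: number
): Promise<{ karma: number }[]>;

const N = 20;
const KARMA_THRESHOLD = 0;

async function flaggedByRecentKarma(userId: string): Promise<boolean> {
  const comments = await fetchRecentComments(userId, N);
  if (comments.length === 0) return false;
  const average = comments.reduce((sum, c) => sum + c.karma, 0) / comments.length;
  return average < KARMA_THRESHOLD;
}
```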
Separately, I want to also make a bigger picture point about moderation on LessWrong:
LessWrong moderation definitely works on a case-law basis
There is no way I can meaningfully write down all the rules and guidelines about how people should behave in discourse in-advance. The way we’ve always made moderation decisions was to iterate locally on what things seem to be going wrong, and then try to formulate new rules, give individuals advice, and try to figure out general principles as they become necessary.
This case is the same. Yep, we’ve decided to take moderation action for this kind of behavior, more than we have done in the past. Said is the first prosecuted case, but I would absolutely want to hold all other users to the same standard going into the future (and indeed my sense is that Duncan is receiving a warning for some things that fall under that same standard). I think it’s good and proper for you to hold us to being consistent and ask us to moderate other people doing similar things in the future the same way as we’ve moderated Said here.
I hope this is all helpful. I still have a feeling you wanted some straightforward non-bullshit answer to a specific question, but I still don’t know which one, though I hope that what I’ve written above clarifies things at least a bit.
I don’t know if it’s good that there’s a positive bias towards karma, but I’m pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall even if it is the best way to handle Said-type cases in particular.
I think I mostly meant “answer” in the sense of “reply” (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity.
I have a lot of extremely strong disagreements with this, but they can wait three months.
Cool, makes sense. Also happy to chat in-person sometime if you want.
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
how is this even a reasonable-
Isn’t this community close in idea terms to Effective Altruism? Wouldn’t it be better to say “Said, if you change your commenting habits in the manner we prescribe, we’ll donate $10k-$100k to a charity of your choice?”
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I’ve been a member on other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and the thought of any sort of payout, even $1, would not have even been thought of.
Seems sad! Seems like there is an opportunity for trade here.
Salaries in Silicon Valley are high, and the time spent on just this specific moderation decision has probably cost around 2.5 total staff-weeks from engineers who could make around $270k on average in industry, so that already suggests something in the $10k range of costs.
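A rough back-of-the-envelope version of that estimate, using the figures above:

$$2.5\ \text{staff-weeks} \times \frac{\$270{,}000}{52\ \text{weeks}} \approx \$13{,}000$$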
And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.
We can also donate instead, but I don’t really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money more than most charities having more money, so I don’t really get what this would improve. Also, not everyone cares about donating to charity, and that’s fine.
The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between “good user” and “ban”.
I guess I’m having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I’m familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don’t know).
I do want to note that my problem isn’t with offering Said money—any offer to any user of any Internet community feels… extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that’s contracting and not unusual. I’m not even necessarily offended by such an offer, just, again, extremely surprised.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
I currently model the extended rationality and AI Alignment community as a professional community which for many people constitutes their primary work context, is responsible for their salary, and is responsible for a lot of daily infrastructure they use. I think viewing it through that lens, it makes sense that limiting someone’s access to some piece of community infrastructure can be quite costly, and somehow compensating people for the considerable cost that lack of access can cause seems reasonable.
I am not too worried about this being abusable. There are maybe 100 users who seem to me to use LessWrong as much as Said and who have contributed a similar amount to the overall rationality and AI Alignment project that I care about. At $10k each, paying every one of them would only end up around $1MM, which is less than the annual budget of Lightcone, and so doesn’t seem totally crazy.
This, plus Vaniver’s comment, has made me update—LW has been doing some pretty confusing things if you look at it like a traditional Internet community that make more sense if you look at it as a professional community, perhaps akin to many of the academic pursuits of science and high-level mathematics. The high dollar figures quoted in many posts confused me until now.
I’ve had a nagging feeling in the past that the rationalist community isn’t careful enough about the incentive problems and conflicts of interest that arise when transferring reasonably large sums of money (despite being very careful about incentive landscapes in other ways—e.g. setting the incentives right for people to post, comment, etc, on LW—and also being fairly scrupulous in general). Most of the other examples I’ve seen have been kinda small-scale and so I haven’t really poked at them, but this proposal seems like it pretty clearly sets up terrible incentives, and is also hard to distinguish from nepotism. I think most people in other communities have gut-level deontological instincts about money which help protect them against these problems (e.g. I take Celarix to be expressing this sort of sentiment upthread), which rationalists are more likely to lack or override—and although I think those people get a lot wrong about money too, cases like these sure seems like a good place to apply Chesterton’s fence.
It might help to think of LW as more like a small town’s newspaper (with paid staff) than a hobbyist forum (with purely volunteer labor), which considers issues with “business expense” lenses instead of “personal budget” lenses.
Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn’t really fit neatly into my categories of what money paid to a person is for, and that may be on my assumptions more than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he’s made, paid to settle a legal issue… the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted on my reply to habryka, extremely surprising.
Exactly. It’s hilarious and awesome. (That is, the decision at least plausibly makes sense in context; and the fact that this is the result, as viewed from the outside, is delightful.)
I endorse much of Oliver’s replies, and I’m mostly burnt out from this convo at the moment so can’t do the followthrough here I’d ideally like. But, it seemed important to publicly state some thoughts here before the moment passed:
Yes, the bar for banning or permanently limiting the speech of a longterm member in Said’s reference class is very high, and I’d treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – that longterm users can never have enough negative effects to warrant taking permanent action on. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens.
I don’t think the Spirit of LessWrong 2009 actually supports you on the specific claims you’re making here.
As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka who founded the LessWrong team and got Eliezer’s buy-in, and now we have 6 years of track record that I think most people agree is much better than having nobody in charge.
But, honestly, I don’t actually think you really believe these meta-level arguments (or, at least won’t upon reflection and maybe a week of distance). I think you disagree with our object level call on Said, and with the overall moderation philosophy that led to it. And, like, I do think there’s a lot to legitimately argue over with the object level call on Said and the overall moderation philosophy surrounding it. I’m fairly burnt out from talking about this in the immediate future, but fwiw I welcome top-level posts arguing about this and expect to engage with them in the future.
And if you decide to quit LessWrong in protest, well, I will be sad about that. I think your writing and generator are quite valuable. I do think there’s an important spirit of early LessWrong that you keep alive, and I’ve made important updates due to your contributions. But, also, man it doesn’t look like your relationship with the site is necessarily that healthy for you.
...
I think a lot of what you’re upset about is an overall sense that your home doesn’t feel like your home anymore. I do think there is a legitimately sad thing worth grieving there.
But I think old LessWrong did, actually, die. And, if it hadn’t, well, it’s been 12 years and the world has changed. I think it wouldn’t make sense, by the Spirit of 2009 LessWrong’s lights, for it to stay exactly the way you remember it. I think some of this is due to specific philosophies the LessWrong 2.0 team brings (I think our original stated goal of “cause intellectual progress to happen faster/better” is very related to and driven by the original sequences, but I think our frame is subtly different). But a lot of it is just the world changing, and Eliezer moving on in some ways (early LessWrong’s spirit was AFAICT largely driven by Eliezer posting frequently, while braindumping a specific set of ideas he had to share. That process is now over, and any subsequent process was going to be different somehow).
I don’t know that I really have a useful takeaway. Sometimes there isn’t one. But insofar as you think it is healthy for you to stay on LessWrong and you don’t want to quit in protest of the mod call on Said, fwiw I continue to welcome posts arguing for what you think the spirit of lesswrong should be, and/or where you think the mod team is fucking up.
(As previously stated, I’m fairly burnt out atm, but would be happy to talk more about this sometime in the future if it seemed helpful)
Not to respond to everything you’ve said, but I question the argument (as I understand it) that because someone {has been around a long time, is well-regarded, has many highly-upvoted contributions, has lots of karma}, they are necessarily someone who at the end of the day you want around / who is net positive for the site.
Good contributions are relevant. But so are costs. Arguing against the costs seems valid, and saying the benefits outweigh the costs seems valid, but assuming I’ve understood you correctly, I don’t think just pointing out that someone has benefits means that obviously you want them as an unrestricted citizen.
(I think in fact how it’s actually gone is that all of the positive factors you list have gone into the moderators’ decisions so far not to outright ban Said over the years, and into why Ray preferred to rate limit Said rather than ban him. If Said were all negatives and no positives, he’d have been banned long ago.)
Correct me though if there’s a deeper argument here that I’m not seeing.
In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.
It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative style are just slightly to the side of the mods’ ideals.
(This was definitely my opinion of the state of moderation over at DSL, for example, until a few months ago. The former problem has, happily, been solved; the latter, unhappily, remains. Less Wrong likewise seems to be well on its way toward solving the former problem; I would not have thought the latter to obtain… but now my opinion, unsurprisingly, has shifted.)
Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way? As far as I know, no such thing has been done. (Aella’s post on the subject was manifestly nonsensical, to say the least; if that’s the best explanation we’ve got, then I think that it’s safe to say that the concept is incoherent nonsense, and using it does more harm than good.) But perhaps I’ve missed it?
In the comment Zack cites, Raemon said the same when raising the idea of making it a prerequisite:
Also for everyone’s awareness, I have since written up Tabooing “Frame Control” (which I’d hoped would be part 1 of 2 posts on the topic), but the reception of the post (60ish karma) didn’t suggest that everyone was like “okay yeah this concept is great”, and I currently think the ball is still in my court to either explain the idea better, refactor it into other ideas, or abandon the project.
Yep! As far as I remember the thread Ray said something akin to “it might be reasonable to treat this as a prerequisite if someone wrote a better explanation of it and there had been a bunch of discussion of this”, but I don’t fully remember.
Aella’s post did seem like it had a bunch of issues and I would feel kind of uncomfortable with having a canonical concept with that as its only reference (I overall liked the post and thought it was good, but I don’t think a concept should reach canonicity just on the basis of that post, given its specific flaws).
Arnold says he is thinking about maybe proposing that in the future, after he has done the work to justify it and has paid attention to how people react to it.