I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here.
Sure, but… I think I don’t know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more.
Some quick thoughts:
LessWrong totally has prerequisites. I don’t think you necessarily need to be an atheist to participate in LessWrong, but if you straightforwardly believe in the Christian god, and haven’t really engaged with the relevant arguments on the site, and you comment on posts that assume that there is no god, I will likely just ban you or ask you to stop. There are many other dimensions for which this is also true. Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
Well-Kept Gardens Die by Pacifism is IMO one of the central moderation principles of LessWrong. I have huge warning flags around your language here and feel like it’s doing something pretty similar to the outraged calls for “censorship” that Eliezer refers to in that post, but I might just be misunderstanding you. In general, LessWrong has always been and will continue to be driven by inside-view models of the moderators about what makes a good discussion forum, and this seems quite important.
I don’t know, I guess your whole comment feels really quite centrally like the kind of thing that Eliezer explicitly warns against in Well-Kept Gardens Die by Pacifism, so let me just reply to quotes from you with quotes from Eliezer:
Since when is it the job of a website administrator to micromanage how intellectuals think and write, and what concepts they need to accept? (As contrasted with removing low-quality, spam, or off-topic comments; breaking up flame wars, &c.)
Eliezer:
But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.
After all—anyone acculturated by academia knows that censorship is a very grave sin… in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors’ grading, and heaven forbid the janitors should speak up in the middle of a colloquium.
[...]
And after all—who will be the censor? Who can possibly be trusted with such power?
Quite a lot of people, probably, in any well-kept garden. But if the garden is even a little divided within itself —if there are factions—if there are people who hang out in the community despite not much trusting the moderator or whoever could potentially wield the banhammer—
(for such internal politics often seem like a matter of far greater import than mere invading barbarians)
—then trying to defend the community is typically depicted as a coup attempt. Who is this one who dares appoint themselves as judge and executioner? Do they think their ownership of the server means they own the people? Own our community? Do they think that control over the source code makes them a god?
You:
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
Eliezer:
Maybe it’s because I grew up on the Internet in places where there was always a sysop, and so I take for granted that whoever runs the server has certain responsibilities. Maybe I understand on a gut level that the opposite of censorship is not academia but 4chan (which probably still has mechanisms to prevent spam). Maybe because I grew up in that wide open space where the freedom that mattered was the freedom to choose a well-kept garden that you liked and that liked you, as if you actually could find a country with good laws. Maybe because I take it for granted that if you don’t like the archwizard, the thing to do is walk away (this did happen to me once, and I did indeed just walk away).
And maybe because I, myself, have often been the one running the server. But I am consistent, usually being first in line to support moderators—even when they’re on the other side from me of the internal politics. I know what happens when an online community starts questioning its moderators. Any political enemy I have on a mailing list who’s popular enough to be dangerous is probably not someone who would abuse that particular power of censorship, and when they put on their moderator’s hat, I vocally support them—they need urging on, not restraining. People who’ve grown up in academia simply don’t realize how strong are the walls of exclusion that keep the trolls out of their lovely garden of “free speech”.
Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving. But this is more accused than realized, so far as I can see.
In any case the light didn’t go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently. While reading a comment at Less Wrong, in fact, though I don’t recall which one.
But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay. Being too humble, doubting themselves an order of magnitude more than I would have doubted them. It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence.
Again, this is all just on a very rough reading of your comment, and I might be misunderstanding you.
My current read here is that your objection is really a very standard “how dare the moderators moderate LessWrong” objection, when like, I do really think we have the mandate to moderate LessWrong how we see fit, and indeed maybe the primary reason why LessWrong is not as dead as basically every other forum of its age and popularity is because it had the seed of “Well-Kept Gardens Die by Pacifism” in it. The understanding that yes, of course the moderators will follow their inside view and make guesses at what is best for the site without trying to be maximally justifiable, and without getting caught in spirals of self-doubt of whether they have the mandate to do X or Y or Z.
But again, I don’t think I super understood what specific question you were asking me, so I might have totally talked past you.
But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.
I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and the treatment of its citizens. The quote is clearly noncentral for this case.
Thanks, to clarify: I don’t intend to make a “how dare the moderators moderate Less Wrong” objection. Rather, the objection is, “How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma.” (That’s why the grandparent specifies “long-time, well-regarded”, “many highly-upvoted contributions”, “We were here first”, &c.) I’m saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don’t want to accept literally any speech (which is why the grandparent mentions “removing low-quality [...] comments” as a legitimate moderator duty).
Note that “permanently restrict the account of” is different from “moderate”. For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic, and Achmiz complied. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I’m accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz.
Regarding Yudkowsky’s essay “Well-Kept Gardens Die By Pacifism”, please note that the end of the essay points out that a forum with a karma system is different from a forum (such as a mailing list) in which moderators are the only attention-allocation mechanism, and urges users not to excessively question themselves when considering downvoting. I agree with this! That’s why the grandparent emphasizes that users who don’t like Achmiz’s comments are free to downvote them. The grandparent also points out that users who don’t want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don’t see what actual problem exists that’s not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.
I fear that Yudkowsky might have been right when he claimed that “[a]ny community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.” I sincerely hope Less Wrong is worth saving.
Hmm, I am still not fully sure about the question (your original comment said “I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here”, which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said’s net-contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.
One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like “purchase him out of his right to use LessWrong” or something like that, by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts, to make it more clear that I am hoping for some kind of trade here, and don’t want this to feel like some kind of social slapdown.
Now, commenting on the individual pieces:
That’s why the grandparent specifies “long-time, well-regarded”, “many highly-upvoted contributions”, “We were here first”, &c.
Well, I mean, the disagreement surely is about whether Said, in his capacity as a commenter, is “well-regarded”. My sense is Said is quite polarizing and saying that he is a “long-time ill-regarded” user would be just as accurate. Similarly saying “many highly-downvoted contributions” is also accurate. (I think seniority matters a bit, though like not beyond a few years, and at least I don’t currently attach any special significance to someone having been around for 5 years vs. 10 years, though I can imagine this being a mistake).
This is not to say I would consider a summary that describes Said as a “long-time ill-regarded menace with many highly downvoted contributions” as accurate. But neither do I think your summary here is accurate. My sense is a long-time user with some highly upvoted comments and some highly downvoted comments can easily be net-negative for the site.
Neither do I feel that net-karma is currently at all a good guide towards quality of site-contributions. First, karma is just very noisy, and sometimes random posts and comments get hundreds of karma because someone on Twitter links to them and the tweet goes viral. But second, and more importantly, there is a huge bias in karma towards positive karma. You frequently find comments with +70 karma and very rarely see comments with −70 karma. Some of that is a natural consequence of making comments and posts with higher karma more visible; some of that is that most people experience pushing someone into the negatives as a lot socially harsher than letting them hover somewhere around 0.
This is again not to say that I am actually confident that Said’s commenting contributions have been net-negative for the site. My current best guess is yes, but it’s not super obvious to me. I am however quite confident that there is a specific type of commenting interaction that has been quite negative, has driven away a lot of really valuable contributors, and doesn’t seem to have produced much value, which is the specific type of interaction that Ray is somehow trying to address with the rate-limiting rules.
The grandparent also points out that users who don’t want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don’t see what actual problem exists that’s not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.
I think people responded pretty extensively to the comment you mention here, but to give my personal response to this:
Most people (and especially new users) don’t keep track of individual commenters to the degree that would make it feasible to ban the people they would predictably have bad interactions with. The current proposal is basically to allow users to ban or unban Said however they like (since they can both fully ban him, and allow Said to comment without rate limit on their post), we are just suggesting a default that I expect to be best for most users and the default site experience.
Downvoting helps a bit with reducing visibility, but it doesn’t help a lot. I see downvoting in substantial part as a signal from the userbase to the authors and moderators to take some kind of long-term action. When someone’s comments are downvoted, authors still get notifications for them, and they still tend to blow up into large demon threads, so just voting on comments doesn’t help that much with solving the moderation problem (this is less true for posts, but only a small fraction of Said’s contributions are in the form of posts, and I actually really like all of his posts, so this doesn’t really apply here). We can try to make automated systems here, but I can’t currently think of any super clear-cut rules we could put into code, since as I said above, net-karma really is not a reliable guide. I do think it’s worth thinking more about (using the average of the most recent N comments helps a bit, but is really far from catching all the cases I am concerned about).
Separately, I want to also make a bigger picture point about moderation on LessWrong:
LessWrong moderation definitely works on a case-law basis
There is no way I can meaningfully write down all the rules and guidelines about how people should behave in discourse in advance. The way we’ve always made moderation decisions was to iterate locally on what things seem to be going wrong, and then try to formulate new rules, give individuals advice, and try to figure out general principles as they become necessary.
This case is the same. Yep, we’ve decided to take moderation action for this kind of behavior, more than we have done in the past. Said is the first prosecuted case, but I would absolutely want to hold all other users to the same standard going into the future (and indeed my sense is that Duncan is receiving a warning for some things that fall under that same standard). I think it’s good and proper for you to hold us to being consistent and ask us to moderate other people doing similar things in the future the same way as we’ve moderated Said here.
I hope this is all helpful. I still have a feeling you wanted some straightforward non-bullshit answer to a specific question, but I still don’t know which one, though I hope that what I’ve written above clarifies things at least a bit.
But second, and more importantly, there is a huge bias in karma towards positive karma.
I don’t know if it’s good that there’s a positive bias in karma, but I’m pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall, even if it is the best way to handle Said-type cases in particular.
I think I mostly meant “answer” in the sense of “reply” (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity.
I have a lot of extremely strong disagreements with this, but they can wait three months.
by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
how is this even a reasonable-
Isn’t this community close in idea terms to Effective Altruism? Wouldn’t it be better to say “Said, if you change your commenting habits in the manner we prescribe, we’ll donate $10k-$100k to a charity of your choice?”
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I’ve been a member on other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and the thought of any sort of payout, even $1, would not have even been thought of.
Seems sad! Seems like there is an opportunity for trade here.
Salaries in Silicon Valley are high, and the time for just this specific moderation decision has probably cost around 2.5 total staff-weeks from engineers who could make around $270k on average in industry, so that already suggests something in the $10k range of costs.
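The cost estimate here can be sanity-checked with a quick back-of-envelope calculation (the salary and staff-week figures are the ones stated above; the 52-week year is an assumption):

```python
# Back-of-envelope check of the moderation-time cost estimate.
# Figures from the comment: ~2.5 staff-weeks spent, engineers who could
# earn ~$270k/year in industry. 52 weeks/year is an assumed round number.
annual_salary = 270_000
weeks_per_year = 52
staff_weeks_spent = 2.5

cost = annual_salary / weeks_per_year * staff_weeks_spent
print(round(cost))  # ~13,000, i.e. "something in the $10k range"
```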
And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.
We can also donate instead, but I don’t really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money over most charities having more money, so I don’t really get what this would improve. Also, not everyone cares about donating to charity, and that’s fine.
The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between “good user” and “ban”.
I guess I’m having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I’m familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don’t know).
I do want to note that my problem isn’t with offering Said money—any offer to any user of any Internet community feels… extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that’s contracting and not unusual. I’m not even necessarily offended by such an offer, just, again, extremely surprised.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
I currently model the extended rationality and AI Alignment community as a professional community which for many people constitutes their primary work context, is responsible for their salary, and is responsible for a lot of daily infrastructure they use. Viewed through that lens, it makes sense that limiting someone’s access to some piece of community infrastructure can be quite costly, and somehow compensating people for the considerable cost that lack of access can cause seems reasonable.
I am not too worried about this being abusable. There are maybe 100 users who seem to me to use LessWrong as much as Said and who have contributed a similar amount to the overall rationality and AI Alignment project that I care about. At $10k each, paying all of them would only come to around $1MM, which is less than the annual budget of Lightcone, and so doesn’t seem totally crazy.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
This, plus Vaniver’s comment, has made me update—LW has been doing some pretty confusing things if you look at it like a traditional Internet community that make more sense if you look at it as a professional community, perhaps akin to many of the academic pursuits of science and high-level mathematics. The high dollar figures quoted in many posts confused me until now.
I’ve had a nagging feeling in the past that the rationalist community isn’t careful enough about the incentive problems and conflicts of interest that arise when transferring reasonably large sums of money (despite being very careful about incentive landscapes in other ways—e.g. setting the incentives right for people to post, comment, etc, on LW—and also being fairly scrupulous in general). Most of the other examples I’ve seen have been kinda small-scale and so I haven’t really poked at them, but this proposal seems like it pretty clearly sets up terrible incentives, and is also hard to distinguish from nepotism. I think most people in other communities have gut-level deontological instincts about money which help protect them against these problems (e.g. I take Celarix to be expressing this sort of sentiment upthread), which rationalists are more likely to lack or override—and although I think those people get a lot wrong about money too, cases like these sure seems like a good place to apply Chesterton’s fence.
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout.
It might help to think of LW as more like a small town’s newspaper (with paid staff) than a hobbyist forum (with purely volunteer labor), which considers issues with “business expense” lenses instead of “personal budget” lenses.
Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn’t really fit neatly into my categories of what money paid to a person is for, and that may be on my assumptions more than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he’s made, paid to settle a legal issue… the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted in my reply to habryka, extremely surprising.
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
Exactly. It’s hilarious and awesome. (That is, the decision at least plausibly makes sense in context; and the fact that this is the result, as viewed from the outside, is delightful.)
We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?
I endorse much of Oliver’s replies, and I’m mostly burnt out from this convo at the moment so can’t do the followthrough here I’d ideally like. But, it seemed important to publicly state some thoughts here before the moment passed:
Yes, the bar for banning or permanently limiting the speech of a longterm member in Said’s reference class is very high, and I’d treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – that longterm users can never have enough negative effects to warrant taking permanent action on. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens.
I don’t think the Spirit of LessWrong 2009 actually supports you on the specific claims you’re making here.
As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka, who founded the LessWrong team and got Eliezer’s buy-in, and now we have a 6-year track record that I think most people agree is much better than nobody in charge.
But, honestly, I don’t actually think you really believe these meta-level arguments (or, at least won’t upon reflection and maybe a week of distance). I think you disagree with our object-level call on Said, and with the overall moderation philosophy that led to it. And, like, I do think there’s a lot to legitimately argue over with the object-level call on Said and the overall moderation philosophy surrounding it. I’m fairly burnt out from talking about this in the immediate future, but fwiw I welcome top-level posts arguing about this and expect to engage with them in the future.
And if you decide to quit LessWrong in protest, well, I will be sad about that. I think your writing and generator are quite valuable. I do think there’s an important spirit of early LessWrong that you keep alive, and I’ve made important updates due to your contributions. But, also, man it doesn’t look like your relationship with the site is necessarily that healthy for you.
...
I think a lot of what you’re upset about is an overall sense that your home doesn’t feel like you’re home anymore. I do think there is a legitimately sad thing worth grieving there.
But I think old LessWrong did, actually, die. And, if it hadn’t, well, it’s been 12 years and the world has changed. I think it wouldn’t make sense, by the Spirit of 2009 LessWrong’s lights, for it to stay exactly the way you remember it. I think some of this is due to specific philosophies the LessWrong 2.0 team brings (I think our original stated goal of “cause intellectual progress to happen faster/better” is very related to and driven by the original sequences, but I think our frame is subtly different). But meanwhile a lot of it is just about the world changing, and Eliezer moving on in some ways (early LessWrong’s spirit was AFAICT largely driven by Eliezer posting frequently, while braindumping a specific set of ideas he had to share; that process is now over, and any subsequent process was going to be different somehow).
I don’t know that I really have a useful takeaway. Sometimes there isn’t one. But insofar as you think it is healthy for you to stay on LessWrong and you don’t want to quit in protest of the mod call on Said, fwiw I continue to welcome posts arguing for what you think the spirit of lesswrong should be, and/or where you think the mod team is fucking up.
(As previously stated, I’m fairly burnt out atm, but would be happy to talk more about this sometime in the future if it seemed helpful)
Not to respond to everything you’ve said, but I question the argument (as I understand it) that because someone {has been around a long time, is well-regarded, has many highly-upvoted contributions, has lots of karma}, this means they are necessarily someone who, at the end of the day, you want around / who is net positive for the site.
Good contributions are relevant. But so are costs. Arguing against the costs seems valid, and saying benefits outweigh costs seems valid, but assuming this is what you’re saying, I don’t think just saying someone has benefits means that obviously you want them as an unrestricted citizen.
(I think in fact how it’s actually gone is that all of those positive factors you list have gone into moderators decisions so far in not outright banning Said over the years, and why Ray preferred to rate limit Said rather than ban him. If Said was all negatives, no positives, he’d have been banned long ago.)
Correct me though if there’s a deeper argument here that I’m not seeing.
In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.
It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative styles, are just slightly to the side of the mods’ ideals.
(This was definitely my opinion of the state of moderation over at DSL, for example, until a few months ago. The former problem has, happily, been solved; the latter, unhappily, remains. Less Wrong likewise seems to be well on its way toward solving the former problem; I would not have thought the latter to obtain… but now my opinion, unsurprisingly, has shifted.)
Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way? As far as I know, no such thing has been done. (Aella’s post on the subject was manifestly nonsensical, to say the least; if that’s the best explanation we’ve got, then I think that it’s safe to say that the concept is incoherent nonsense, and using it does more harm than good.) But perhaps I’ve missed it?
Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way?
In the comment Zack cites, Raemon said the same when raising the idea of making it a prerequisite:
I have on my todo list to write up a post that’s like “hey guys here is an explanation of Frame Control/Manipulation that is more rigorous and more neutrally worded than Aella’s post about it, and here’s why I think we should have a habit of noticing it.”.
Also for everyone’s awareness, I have since written up Tabooing “Frame Control” (which I’d hoped would be like part 1 of 2 posts on the topic), but the reception of the post, i.e. 60ish karma, didn’t seem like everyone was like “okay yeah this concept is great”, and I currently think the ball is still in my court for either explaining the idea better, refactoring it into other ideas, or abandoning the project.
Yep! As far as I remember the thread Ray said something akin to “it might be reasonable to treat this as a prerequisite if someone wrote a better explanation of it and there had been a bunch of discussion of this”, but I don’t fully remember.
Aella’s post did seem like it had a bunch of issues and I would feel kind of uncomfortable with having a canonical concept with that as its only reference (I overall liked the post and thought it was good, but I don’t think a concept should reach canonicity just on the basis of that post, given its specific flaws).
Sure, but… I think I don’t know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more.
Some quick thoughts:
LessWrong totally has prerequisites. I don’t think you necessarily need to be an atheist to participate in LessWrong, but if you straightforwardly believe in the Christian god, and haven’t really engaged with the relevant arguments on the site, and you comment on posts that assume that there is no god, I will likely just ban you or ask you to stop. There are many other dimensions for which this is also true. Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
Well-Kept Gardens Die by Pacifism is IMO one of the central moderation principles of LessWrong. I have huge warning flags around your language here and feel like it’s doing something pretty similar to the outraged calls for “censorship” that Eliezer refers to in that post, but I might just be misunderstanding you. In general, LessWrong has always been and will continue to be driven by inside-view models of the moderators about what makes a good discussion forum, and this seems quite important.
I don’t know, I guess your whole comment feels really quite centrally like the kind of thing that Eliezer explicitly warns against in Well-Kept Gardens Die by Pacifism, so let me just reply to quotes from you with quotes from Eliezer:
Eliezer:
You:
Eliezer:
Again, this is all just on a very rough reading of your comment, and I might be misunderstanding you.
My current read here is that your objection is really a very standard “how dare the moderators moderate LessWrong” objection, when like, I do really think we have the mandate to moderate LessWrong how we see fit, and indeed maybe the primary reason why LessWrong is not as dead as basically every other forum of its age and popularity is because it had the seed of “Well-Kept Gardens Die by Pacifism” in it. The understanding that yes, of course the moderators will follow their inside view and make guesses at what is best for the site without trying to be maximally justifiable, and without getting caught in spirals of self-doubt of whether they have the mandate to do X or Y or Z.
But again, I don’t think I super understood what specific question you were asking me, so I might have totally talked past you.
I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and the treatment of its citizens. The quote is clearly noncentral for this case.
Thanks, to clarify: I don’t intend to make a “how dare the moderators moderate Less Wrong” objection. Rather, the objection is, “How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma.” (That’s why the grandparent specifies “long-time, well-regarded”, “many highly-upvoted contributions”, “We were here first”, &c.) I’m saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don’t want to accept literally any speech (which is why the grandparent mentions “removing low-quality [...] comments” as a legitimate moderator duty).
Note that “permanently restrict the account of” is different from “moderate”. For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic, and Achmiz complied. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I’m accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz.
Regarding Yudkowsky’s essay “Well-Kept Gardens Die By Pacifism”, please note that the end of the essay points out that a forum with a karma system is different from a forum (such as a mailing list) in which moderators are the only attention-allocation mechanism, and urges users not to excessively question themselves when considering downvoting. I agree with this! That’s why the grandparent emphasizes that users who don’t like Achmiz’s comments are free to downvote them. The grandparent also points out that users who don’t want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don’t see what actual problem exists that’s not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.
I fear that Yudkowsky might have been right when he claimed that “[a]ny community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.” I sincerely hope Less Wrong is worth saving.
Hmm, I am still not fully sure about the question (your original comment said “I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here”, which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said’s net-contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.
One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like “purchase him out of his right to use LessWrong” or something like that, by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts, to make it more clear that I am hoping for some kind of trade here, and don’t want this to feel like some kind of social slapdown.
Now, commenting on the individual pieces:
Well, I mean, the disagreement surely is about whether Said, in his capacity as a commenter, is “well-regarded”. My sense is Said is quite polarizing and saying that he is a “long-time ill-regarded” user would be just as accurate. Similarly saying “many highly-downvoted contributions” is also accurate. (I think seniority matters a bit, though like not beyond a few years, and at least I don’t currently attach any special significance to someone having been around for 5 years vs. 10 years, though I can imagine this being a mistake).
This is not to say I would consider a summary that describes Said as a “long-time ill-regarded menace with many highly downvoted contributions” as accurate. But neither do I think your summary here is accurate. My sense is a long-time user with some highly upvoted comments and some highly downvoted comments can easily be net-negative for the site.
Neither do I feel that net-karma is currently at all a good guide to the quality of site-contributions. First, karma is just very noisy, and sometimes random posts and comments get hundreds of karma when someone on Twitter links to them and the tweet goes viral. But second, and more importantly, there is a huge bias in karma towards positive karma. You frequently find comments with +70 karma and very rarely see comments with −70 karma. Some of that is a natural consequence of making comments and posts with higher karma more visible, and some of that is that most people experience pushing someone into the negatives as a lot socially harsher than letting them hover somewhere around 0.
This is again not to say that I am actually confident that Said’s commenting contributions have been net-negative for the site. My current best guess is yes, but it’s not super obvious to me. I am however quite confident that there is a specific type of commenting interaction that has been quite negative, has driven away a lot of really valuable contributors, and doesn’t seem to have produced much value, which is the specific type of interaction that Ray is somehow trying to address with the rate-limiting rules.
I think people responded pretty extensively to the comment you mention here, but to give my personal response to this:
Most people (and especially new users) don’t keep track of individual commenters to the degree that would make it feasible to ban the people they would predictably have bad interactions with. The current proposal is basically to allow users to ban or unban Said however they like (since they can both fully ban him, and allow Said to comment without rate limit on their post), we are just suggesting a default that I expect to be best for most users and the default site experience.
Downvoting helps a bit with reducing visibility, but it doesn’t help a lot. I see downvoting in substantial part as a signal from the userbase to the authors and moderators to take some kind of long-term action. When someone’s comments are downvoted, authors still get notifications for them, and they still tend to blow up into large demon threads, so just voting on comments doesn’t help that much with solving the moderation problem (this is less true for posts, but only a small fraction of Said’s contributions are in the form of posts, and I actually really like all of his posts, so this doesn’t really apply here). We can try to make automated systems here, but I can’t currently think of any super clear-cut rules we could put into code, since as I said above, net-karma really is not a reliable guide. I do think it’s worth thinking more about (using the average of the most recent N comments helps a bit, but is really far from catching all the cases I am concerned about).
Separately, I want to also make a bigger picture point about moderation on LessWrong:
LessWrong moderation definitely works on a case-law basis
There is no way I can meaningfully write down all the rules and guidelines about how people should behave in discourse in-advance. The way we’ve always made moderation decisions was to iterate locally on what things seem to be going wrong, and then try to formulate new rules, give individuals advice, and try to figure out general principles as they become necessary.
This case is the same. Yep, we’ve decided to take moderation action for this kind of behavior, more than we have done in the past. Said is the first prosecuted case, but I would absolutely want to hold all other users to the same standard going into the future (and indeed my sense is that Duncan is receiving a warning for some things that fall under that same standard). I think it’s good and proper for you to hold us to being consistent and ask us to moderate other people doing similar things in the future the same way as we’ve moderated Said here.
I hope this is all helpful. I still have a feeling you wanted some straightforward non-bullshit answer to a specific question, but I still don’t know which one, though I hope that what I’ve written above clarifies things at least a bit.
I don’t know if it’s good that there’s a bias towards positive karma, but I’m pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall, even if it is the best way to handle Said-type cases in particular.
I think I mostly meant “answer” in the sense of “reply” (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity.
I have a lot of extremely strong disagreements with this, but they can wait three months.
Cool, makes sense. Also happy to chat in-person sometime if you want.
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
how is this even a reasonable-
Isn’t this community close in idea terms to Effective Altruism? Wouldn’t it be better to say “Said, if you change your commenting habits in the manner we prescribe, we’ll donate $10k-$100k to a charity of your choice?”
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I’ve been a member on other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and the thought of any sort of payout, even $1, would not have even been thought of.
Seems sad! Seems like there is an opportunity for trade here.
Salaries in Silicon Valley are high, and the time spent on this specific moderation decision alone has probably cost around 2.5 total staff-weeks from engineers who could make around $270k on average in industry, so that already suggests something in the $10k range of costs.
And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.
We can also donate instead, but I don’t really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money more than most charities having more money, so I don’t really get what this would improve. Also, not everyone cares about donating to charity, and that’s fine.
The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between “good user” and “ban”.
I guess I’m having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I’m familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don’t know).
I do want to note that my problem isn’t with offering Said money—any offer to any user of any Internet community feels… extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that’s contracting and not unusual. I’m not even necessarily offended by such an offer, just, again, extremely surprised.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
I currently model the extended rationality and AI Alignment community as a professional community which for many people constitutes their primary work context, is responsible for their salary, and is responsible for a lot of the daily infrastructure they use. Viewed through that lens, it makes sense that limiting someone’s access to some piece of community infrastructure can be quite costly, and somehow compensating people for the considerable cost that lack of access can cause seems reasonable.
I am not too worried about this being abusable. There are maybe 100 users who seem to me to use LessWrong as much as Said and who have contributed a similar amount to the overall rationality and AI Alignment project that I care about. At $10k paying each one of them would only end up around $1MM, which is less than the annual budget of Lightcone, and so doesn’t seem totally crazy.
This, plus Vaniver’s comment, has made me update—LW has been doing some pretty confusing things if you look at it like a traditional Internet community that make more sense if you look at it as a professional community, perhaps akin to many of the academic pursuits of science and high-level mathematics. The high dollar figures quoted in many posts confused me until now.
I’ve had a nagging feeling in the past that the rationalist community isn’t careful enough about the incentive problems and conflicts of interest that arise when transferring reasonably large sums of money (despite being very careful about incentive landscapes in other ways—e.g. setting the incentives right for people to post, comment, etc, on LW—and also being fairly scrupulous in general). Most of the other examples I’ve seen have been kinda small-scale and so I haven’t really poked at them, but this proposal seems like it pretty clearly sets up terrible incentives, and is also hard to distinguish from nepotism. I think most people in other communities have gut-level deontological instincts about money which help protect them against these problems (e.g. I take Celarix to be expressing this sort of sentiment upthread), which rationalists are more likely to lack or override—and although I think those people get a lot wrong about money too, cases like these sure seem like a good place to apply Chesterton’s fence.
It might help to think of LW as more like a small town’s newspaper (with paid staff) than a hobbyist forum (with purely volunteer labor), which considers issues with “business expense” lenses instead of “personal budget” lenses.
Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn’t really fit neatly into my categories of what money paid to a person is for, and that may be on my assumptions more than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he’s made, paid to settle a legal issue… the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted in my reply to habryka, extremely surprising.
Exactly. It’s hilarious and awesome. (That is, the decision at least plausibly makes sense in context; and the fact that this is the result, as viewed from the outside, is delightful.)
I endorse much of Oliver’s replies, and I’m mostly burnt out from this convo at the moment so can’t do the followthrough here I’d ideally like. But, it seemed important to publicly state some thoughts here before the moment passed:
Yes, the bar for banning or permanently limiting the speech of a longterm member in Said’s reference class is very high, and I’d treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – that longterm users can never have enough negative effects to warrant taking permanent action on. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens.
I don’t think the Spirit of LessWrong 2009 actually supports you on the specific claims you’re making here.
As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka who founded the LessWrong team and got Eliezer’s buy-in, and now we have 6 years of track record that I think most people agree is much better than nobody in charge.
But, honestly, I don’t actually think you really believe these meta-level arguments (or, at least won’t upon reflection and maybe a week of distance). I think you disagree with our object-level call on Said, and with the overall moderation philosophy that led to it. And, like, I do think there’s a lot to legitimately argue over with the object-level call on Said and the overall moderation philosophy surrounding it. I’m fairly burnt out from talking about this in the immediate future, but fwiw I welcome top-level posts arguing about this and expect to engage with them in the future.
And if you decide to quit LessWrong in protest, well, I will be sad about that. I think your writing and generator are quite valuable. I do think there’s an important spirit of early LessWrong that you keep alive, and I’ve made important updates due to your contributions. But, also, man it doesn’t look like your relationship with the site is necessarily that healthy for you.
...
I think a lot of what you’re upset about is an overall sense that your home doesn’t feel like your home anymore. I do think there is a legitimately sad thing worth grieving there.
But I think old LessWrong did, actually, die. And, if it hadn’t, well, it’s been 12 years and the world has changed. I think it wouldn’t make sense, by the Spirit of 2009 LessWrong’s lights, to stay exactly the way you remember it. I think some of this is due to specific philosophies the LessWrong 2.0 team brings (I think our original stated goal of “cause intellectual progress to happen faster/better” is very related to and driven by the original sequences, but I think our frame is subtly different). But meanwhile a lot of it is just about the world changing, and Eliezer moving on in some ways (early LessWrong’s spirit was AFAICT largely driven by Eliezer posting frequently, while braindumping a specific set of ideas he had to share. That process is now over, and any subsequent process was going to be different somehow).
I don’t know that I really have a useful takeaway. Sometimes there isn’t one. But insofar as you think it is healthy for you to stay on LessWrong and you don’t want to quit in protest of the mod call on Said, fwiw I continue to welcome posts arguing for what you think the spirit of lesswrong should be, and/or where you think the mod team is fucking up.
(As previously stated, I’m fairly burnt out atm, but would be happy to talk more about this sometime in the future if it seemed helpful)
Not to respond to everything you’ve said, but I question the argument (as I understand it) that because someone has {been around a long time, is well-regarded, has many highly-upvoted contributions, has lots of karma}, this means they are necessarily someone who, at the end of the day, you want around / is net positive for the site.
Good contributions are relevant. But so are costs. Arguing against the costs seems valid, saying benefits outweigh costs seems valid, but assuming this is what you’re saying, I don’t think just saying someone has benefits means that obviously you want them as an unrestricted citizen.
(I think in fact how it’s actually gone is that all of those positive factors you list have gone into moderators decisions so far in not outright banning Said over the years, and why Ray preferred to rate limit Said rather than ban him. If Said was all negatives, no positives, he’d have been banned long ago.)
Correct me though if there’s a deeper argument here that I’m not seeing.
In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.
It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative styles, are just slightly to the side of the mods’ ideals.
(This was definitely my opinion of the state of moderation over at DSL, for example, until a few months ago. The former problem has, happily, been solved; the latter, unhappily, remains. Less Wrong likewise seems to be well on its way toward solving the former problem; I would not have thought the latter to obtain… but now my opinion, unsurprisingly, has shifted.)
Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way? As far as I know, no such thing has been done. (Aella’s post on the subject was manifestly nonsensical, to say the least; if that’s the best explanation we’ve got, then I think that it’s safe to say that the concept is incoherent nonsense, and using it does more harm than good.) But perhaps I’ve missed it?