Hmm, I am still not fully sure about the question (your original comment said “I think Oli Habryka has the integrity to give me a straight, no-bullshit answer here”, which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit.
There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said’s net-contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here.
One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like “purchase him out of his right to use LessWrong”, by offering him something like $10k-$100k to change his commenting style or to comment less in certain contexts, to make it more clear that I am hoping for some kind of trade here, and don’t want this to feel like some kind of social slapdown.
Now, commenting on the individual pieces:
That’s why the grandparent specifies “long-time, well-regarded”, “many highly-upvoted contributions”, “We were here first”, &c.
Well, I mean, the disagreement surely is about whether Said, in his capacity as a commenter, is “well-regarded”. My sense is that Said is quite polarizing, and saying that he is a “long-time ill-regarded” user would be just as accurate. Similarly, saying “many highly-downvoted contributions” is also accurate. (I think seniority matters a bit, though not much beyond a few years, and I don’t currently attach any special significance to someone having been around for 5 years vs. 10 years, though I can imagine this being a mistake.)
This is not to say I would consider a summary that describes Said as a “long-time ill-regarded menace with many highly downvoted contributions” as accurate. But neither do I think your summary here is accurate. My sense is a long-time user with some highly upvoted comments and some highly downvoted comments can easily be net-negative for the site.
Neither do I feel that net-karma is currently at all a good guide to the quality of site-contributions. First, karma is just very noisy, and sometimes random posts and comments get hundreds of karma when someone on Twitter links to them and the tweet goes viral. But second, and more importantly, there is a huge bias in karma towards positive karma. You frequently find comments with +70 karma and very rarely see comments with −70 karma. Some of that is a natural consequence of making comments and posts with higher karma more visible, some of that is that most people experience pushing someone into the negatives as a lot socially harsher than letting them hover somewhere around 0.
This is again not to say that I am actually confident that Said’s commenting contributions have been net-negative for the site. My current best guess is yes, but it’s not super obvious to me. I am however quite confident that there is a specific type of commenting interaction that has been quite negative, has driven away a lot of really valuable contributors, and doesn’t seem to have produced much value, which is the specific type of interaction that Ray is somehow trying to address with the rate-limiting rules.
The grandparent also points out that users who don’t want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don’t see what actual problem exists that’s not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.
I think people responded pretty extensively to the comment you mention here, but to give my personal response to this:
Most people (and especially new users) don’t keep track of individual commenters to the degree that would make it feasible to ban the people they would predictably have bad interactions with. The current proposal is basically to allow users to ban or unban Said however they like (since they can both fully ban him, and allow Said to comment without rate limit on their post), we are just suggesting a default that I expect to be best for most users and the default site experience.
Downvoting helps a bit with reducing visibility, but it doesn’t help a lot. I see downvoting in substantial part as a signal from the userbase to the authors and moderators to take some kind of long-term action. When someone’s comments are downvoted, authors still get notifications for them, and they still tend to blow up into large demon threads, and so just voting on comments doesn’t help that much with solving the moderation problem (this is less true for posts, but only a small fraction of Said’s contributions are in the form of posts, and I actually really like all of his posts, so this doesn’t really apply here). We can try to make automated systems here, but I can’t currently think of any super clear-cut rules we could put into code, since as I said above, net-karma really is not a reliable guide. I do think it’s worth thinking more about (using the average of the most recent N comments helps a bit, but is really far from catching all the cases I am concerned about).
Separately, I want to also make a bigger picture point about moderation on LessWrong:
LessWrong moderation definitely works on a case-law basis
There is no way I can meaningfully write down in advance all the rules and guidelines about how people should behave in discourse. The way we’ve always made moderation decisions has been to iterate locally on what seems to be going wrong, and then try to formulate new rules, give individuals advice, and try to figure out general principles as they become necessary.
This case is the same. Yep, we’ve decided to take moderation action for this kind of behavior, more than we have done in the past. Said is the first prosecuted case, but I would absolutely want to hold all other users to the same standard going into the future (and indeed my sense is that Duncan is receiving a warning for some things that fall under that same standard). I think it’s good and proper for you to hold us to being consistent and ask us to moderate other people doing similar things in the future the same way as we’ve moderated Said here.
I hope this is all helpful. I still have a feeling you wanted some straightforward non-bullshit answer to a specific question, but I still don’t know which one, though I hope that what I’ve written above clarifies things at least a bit.
But second, and more importantly, there is a huge bias in karma towards positive karma.
I don’t know if it’s good that there’s a positive bias in karma, but I’m pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall, even if it is the best way to handle Said-type cases in particular.
I think I mostly meant “answer” in the sense of “reply” (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity.
I have a lot of extremely strong disagreements with this, but they can wait three months.
by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
how is this even a reasonable-
Isn’t this community close in idea terms to Effective Altruism? Wouldn’t it be better to say “Said, if you change your commenting habits in the manner we prescribe, we’ll donate $10k-$100k to a charity of your choice?”
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I’ve been a member on other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and any sort of payout, even $1, would not even have been considered.
Seems sad! Seems like there is an opportunity for trade here.
Salaries in Silicon Valley are high, and the time for this specific moderation decision alone has probably cost around 2.5 total staff weeks for engineers who could make around $270k on average in industry, so that already suggests something in the $10k range of costs.
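That back-of-envelope estimate can be checked directly (a quick sketch; the ~$270k salary and 2.5-week figures are the comment’s own rough assumptions, not exact accounting):

```python
# Rough cost of staff time spent on this moderation decision,
# using the comment's assumed figures.
annual_salary = 270_000  # assumed average industry salary ($/year)
weeks_spent = 2.5        # estimated total staff weeks

cost = annual_salary * (weeks_spent / 52)
print(f"${cost:,.0f}")  # roughly $13k, i.e. "in the $10k range"
```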
And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.
We can also donate instead, but I don’t really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money more than most charities having more money, so I don’t really get what this would improve. Also, not everyone cares about donating to charity, and that’s fine.
The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between “good user” and “ban”.
I guess I’m having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I’m familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don’t know).
I do want to note that my problem isn’t with offering Said money—any offer to any user of any Internet community feels… extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that’s contracting and not unusual. I’m not even necessarily offended by such an offer, just, again, extremely surprised.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
I currently model the extended rationality and AI Alignment community as a professional community which for many people constitutes their primary work context, is responsible for their salary, and is responsible for a lot of daily infrastructure they use. Viewed through that lens, it makes sense that limiting someone’s access to some piece of community infrastructure can be quite costly, and somehow compensating people for the considerable cost that lack of access can cause seems reasonable.
I am not too worried about this being abusable. There are maybe 100 users who seem to me to use LessWrong as much as Said and who have contributed a similar amount to the overall rationality and AI Alignment project that I care about. At $10k each, paying all of them would only come to around $1M, which is less than the annual budget of Lightcone, and so doesn’t seem totally crazy.
I think if you model things as just “an internet community” this will give you the wrong intuitions.
This, plus Vaniver’s comment, has made me update—LW has been doing some pretty confusing things if you look at it like a traditional Internet community that make more sense if you look at it as a professional community, perhaps akin to many of the academic pursuits of science and high-level mathematics. The high dollar figures quoted in many posts confused me until now.
I’ve had a nagging feeling in the past that the rationalist community isn’t careful enough about the incentive problems and conflicts of interest that arise when transferring reasonably large sums of money (despite being very careful about incentive landscapes in other ways—e.g. setting the incentives right for people to post, comment, etc., on LW—and also being fairly scrupulous in general). Most of the other examples I’ve seen have been kinda small-scale and so I haven’t really poked at them, but this proposal seems like it pretty clearly sets up terrible incentives, and is also hard to distinguish from nepotism. I think most people in other communities have gut-level deontological instincts about money which help protect them against these problems (e.g. I take Celarix to be expressing this sort of sentiment upthread), which rationalists are more likely to lack or override—and although I think those people get a lot wrong about money too, cases like these sure seem like a good place to apply Chesterton’s fence.
I can’t believe there’s a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout.
It might help to think of LW as more like a small town’s newspaper (with paid staff) than a hobbyist forum (with purely volunteer labor), which considers issues with “business expense” lenses instead of “personal budget” lenses.
Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn’t really fit neatly into my categories of what money paid to a person is for, and that may be on my assumptions more than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he’s made, paid to settle a legal issue… the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted in my reply to habryka, extremely surprising.
What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?
Exactly. It’s hilarious and awesome. (That is, the decision at least plausibly makes sense in context; and the fact that this is the result, as viewed from the outside, is delightful.)
Cool, makes sense. Also happy to chat in-person sometime if you want.