The Hamming Problem of Group Rationality
The upcoming moderation changes are long-term catastrophic. They’re deliberately making it easy for high-status people to silence anyone and everyone they don’t want to hear from, without penalty or stigma. I trust literally no one with this power, even if they have the best of intentions; the social-monkey incentives to abuse it are strong, and of all our instincts, that one is probably the best at getting around conscious intentions; according to one of the stronger prevailing theories, it’s literally what our intelligence developed to do.
This is especially bad because as I see it, protecting group epistemics from corrupting social incentives is the Hamming problem of group rationality. The main difference between pursuing the art of rationality alone vs in a group is that in a group you have to manage politics, and politics is an especially strong mind-killer at the small-group level, even more so than at the national tribal level.
So as far as I’m concerned, the dev team has forfeited the war to win one temporary battle (getting Eliezer back on the site). And as significant a battle as that is, it’s not worth the war.
I think you’re ignoring a bunch of things:
(1) There’s at least as strong a social incentive on LW to respond fairly to criticism and disagreement rather than to delete things outright. I noticed, for example, that I expected to (and did) get lots of karma from responding to your doomsaying on the moderation threads, and would have mildly preferred to continue the conversation instead of just deleting your comments, had I had the power. Strengthening and bulletproofing these social incentives seems to me to be the correct solution to this Hamming problem.
(2) The moderation changes are experimental, not set in stone, and given how well LW2.0 has been doing so far in such a minuscule length of time, we should give the team time to notice errors and correct them.
(3) For all practical purposes, LW is under “Reign of Terror” moderation by the current mod team. I’m curious what your crux is for believing individual authors will be significantly more brutal about the job than the mod team is; that’s the opposite of what I would expect.
The social incentives favor authors doing it more, and are ambivalent for the mods. Though I don’t trust them either, particularly after such a massive failure of judgment as proposing this change.
Facebook, Tumblr, and Wordpress all give authors this power. Why is it catastrophic here, where it wasn’t there? Or do you think moderation has turned out to be catastrophic in those places?
I am not PDV, so perhaps he would answer differently, but here’s my take:
On Facebook and Tumblr, this has absolutely turned out to be catastrophic. (To be clear, Facebook and Tumblr are a pair of massive catastrophes for many reasons and in many ways; this particular issue is just another house fire in the middle of a nuclear conflagration.)
“Wordpress” is an inapt comparison. The relevant unit there is the individual Wordpress blog. We can observe two things:
First: all Wordpress blog owners have the ability to delete comments tracelessly, and this does indeed affect the kinds of discussions that can be, and are, had in such places. But…
Second: Wordpress blog owners vary dramatically in the extent to which they use this power, and some have gained a reputation for basically never using it. You will find that such blogs are very different places from the blogs on which the owner does use his comment-deletion power…
… which once again points to the critical necessity of being able to tell when (and how often, etc.) someone is using such a power; hence the need for a moderation log.
Wordpress seems like a very apt comparison, since LessWrong is also being conceptualized as a bunch of individual blogs with varying moderation policies.
Does Wordpress have such a system?
(To be clear, I support the idea of a moderation log. I’m just curious whether it’s actually as necessary as you claim.)
No; at least, not in an automated way without plugins, and not that I can recall seeing on any Wordpress blog I’ve visited. And I don’t think I’d want one for my own Wordpress blog; comments there fall very neatly into two categories, “legitimate comments that I want to keep” and “spam by automated web crawlers that I want to obliterate without a trace”. I wouldn’t want a public moderation log if that log were just going to be a list of spam links.
FYI, a major point of disagreement of mine is that I’d be pretty surprised if a Slate Star Codex without the moderation log had a dramatically different comment section. I think the comment quality is determined primarily by founder effects (i.e., what does Scott talk about, and which initial friends of his comment regularly?).
(Insofar as it did have a different comment section, I suspect it’d be one that you liked slightly less and I liked slightly more. In general, the comments have gotten better as Scott has been willing to ban people more arbitrarily, AFAICT, and I don’t have a sense that the people whose comments I value would leave if the mod log were gone or had never been implemented, but I wouldn’t be surprised if you had the reverse sense.)
I think SSC without the moderation log absolutely would have a dramatically different comment section, but… I hesitate to speak of the reason in public because it touches on certain issues that, as I understand, Scott prefers not to see discussed overtly in connection with the site, and I’d like to respect that preference. However, I think this is an important matter; perhaps we could discuss it privately? I can be found via webchat at chat.myfullname.net, or by email at myfirstname@myfullname.net.
I think this is likely true (which is to say, it’s true that Scott has been willing to ban people more, and it’s true that the comment section has improved; correlation is of course not causation, but it does seem plausible in this case).
However, I think without the mod log, things would be different (see above).
Yup, happy to discuss that privately. Will probably ping you later tonight.
They don’t have intellectual progress as a goal.
I agree with your general point (“protecting group epistemics from corrupting social incentives is the Hamming problem of group rationality”). I am not sure whether I agree with your specific point (that the upcoming moderation changes are long-term catastrophic). Could you clarify which implementation of the new moderation tools you consider to be catastrophic, and which not (or am I misunderstanding what aspect of the changes you mean)?
(That is, which of these would be catastrophic, if enabled: (a) totally traceless deletion; (b) deletion, but with a moderation log that shows the deletion event but not the deleted content; (c) deletion, but with a moderation log that shows the deletion event and the deleted content; (d) in-place hiding?)
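(For concreteness, here’s a minimal sketch of what a log entry might look like under each of these options. The record type and field names are hypothetical, purely for illustration, and not any actual LW2.0 schema:)

```typescript
// Hypothetical moderation-log entry; names are illustrative only.
type DeletionEvent = {
  timestamp: Date;
  moderator: string;       // who performed the deletion
  author: string;          // whose comment was affected
  deletedContent?: string; // present under option (c); omitted under option (b)
};

// (a) traceless deletion: no DeletionEvent is ever recorded.
// (b) log the event only: record an entry with deletedContent omitted.
// (c) log event and content: record an entry with deletedContent filled in.
// (d) in-place hiding: the comment stays in the thread but is collapsed,
//     so no separate log entry is strictly needed.
const moderationLog: DeletionEvent[] = [];
```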
Edit: By the way, I assume you actually meant to say “protecting social incentives from corrupting group epistemics is the Hamming problem of group rationality”, and not the other way around (like you currently have it)?
By the way, it’s not even clear that this battle is won yet. I certainly don’t see any new posts from Eliezer (Inadequate Equilibria doesn’t count), and definitely no comments from him (and if he posts but doesn’t interact with commenters, then there’s no point).
(This is not to disagree with the OP, to be clear; quite the opposite.)
Just to clarify: if Eliezer had been the only one to request stronger moderation capabilities, we would not have built them. The problem of LessWrong being a place that feels hostile to a lot of historical top contributors, and to a lot of potential new top contributors, is a major one that has been on my radar since I started the whole LW 2.0 project.
So while people are obviously free to have their own models of why the LW2 team is doing various things, the reasoning presented here does not resonate with me, and does not seem to reflect my internal experience of making this decision.
This is all fair enough, but to clarify, my comment was aimed only at the notion that the goal of “get Eliezer to use the site” has been achieved, not whether it was the only goal, or anything else.
You are wrong about your own motivations in a way trivially predictable by monkey dynamics.
In line with this, I have given up on LesserWrong. It’s clearly not going to be a source of insight I can trust for much longer, and I have doubts that it was any time recently.
I am in the process of taking everything I posted here and putting it back on my personal blog. After that’s been done, I don’t know whether I will interact with this site at all, since the main contribution I feel is needed is banned and the mods have threatened to ban me as well.
What specific contribution is banned? Is there a link to the threatened PDV-banning?
(It occurs to me that that might be read as a coded way of accusing you of lying. That is not in any way my intention.)
We sent PDV a message saying we were worried about him causing a lot of unnecessary conflict on the site, and that if that trend continues, it probably makes the most sense to restrict his ability to comment in some ways, and, if that doesn’t help, to ban him. (In general, we try to reach out to people privately first, before giving them a public warning, since I think there are a lot of weird status dynamics that come into play with a public warning, which make it more stressful both for us and for the person we are talking to.)
If PDV is open to that, I would be happy to share the whole conversation we had with him here.
The following is in no way a comment on this specific moderator-user interaction (about which it is not my place to speak), but a general comment about approaches to moderation:
Asking someone, even in private (perhaps especially in private), to accept some sort of punishment (such as restrictions on commenting), or informing them that you’re going to apply corrective measures and that they might then get back into your good graces (in order to avoid further sanction), also involves a lot of—as you put it—“weird status dynamics”.
In general, the approach you describe—though I understand some of the reasons why it appeals to you (some of which are indeed perfectly noble and praiseworthy reasons)—essentially sets you up as a “corrective” authority. There are some quite unfortunate status/etc. implications of such a relationship.
Also important to mention here: if a representative poll of the LW2.0 user base did indeed show that people prefer having moderators warn them in public rather than send them PMs, then I would totally switch to that policy.
I would love to see such a poll be conducted.
Yes, I agree with this. There are definitely costs to both, though I expect on average the problems to be weaker if we privately message people than if we publicly warn them (and that people when polled would prefer to be privately messaged over being publicly warned).
And yes, in some sense the moderators (or at least the admins) are a “corrective authority”, though I think that term doesn’t fully resonate with my idea of what they do, and has some misleading connotations. The admins are ultimately (and somewhat inevitably) the final decision makers when it comes to deciding what types of content, discussion, and engagement the site incentivizes.
We can shape the incentives via modifications to the karma system or the ranking algorithm, the affordances we give users on the site, the moderation and auto-moderation features available to other users, or direct moderator action; but overall, if we end up unhappy (and reflectively unhappy, after sufficient time to consider the pros and cons), then we will make changes to the site to correct that.
I think some forms of governance put us more or less directly into the position of a corrective authority, and I do generally prefer to avoid that framing, since I think it has some unnecessarily adversarial aspects. I think the correct strategy is for us to take individual moderator action when we see specific or low-frequency problems, and then come up with some kind of more principled solution if the problems happen more frequently (i.e., define transparent site and commenting guidelines, make changes to the visibility of various things, change the karma system, etc.). That said, individual moderation action is something I will always want to keep available, and it is generally more transparent than other interventions, which I prefer, all else equal.
Much of what you say here is sensible, so this is not really to disagree with your comment, but—I’m not sure my meaning came across clearly, when I said “corrective authority”. I meant it in opposition to what we might call “selective authority” (as in “authority that selects”—as opposed to “authority that corrects”). Though that, too, is a rather cryptic term, I’m afraid… I may try to explain in detail later, when I have a bit more time and have formulated my view on this concisely.
Ah, yes. That changes the framing.
Calling out obvious groupthink and bullshit. Which is depressingly common, and increasingly so.
PDV, I like you and often agree with you and am on your side about the circling thing, so I hope you will take this in the sense it is meant, but I agree with gworley. In particular, I think you often tend to escalate arguments you’re part of, sometimes to the point of transforming them into demon threads even though they didn’t have to be. That’s a good trait to have sometimes: it’s what lets you point out that the emperor has no clothes. But I get the feeling that you’re not deploying it particularly tactically, and you’d probably do better at advancing your goals if you also deescalated sometimes.
If I can be direct, since you often are: at least for myself, I appreciate that you disagree, but I really dislike the way you do it. In particular, you are often unnecessarily confrontational in ways that end up insulting other people on the site or assuming bad faith on the part of your interlocutors.
For example, you already did this in the comments on this post when you replied to Oli to say he was wrong about his own motivation. I think it’s fair to point out that someone may be mistaken about their own motivations, but you do it in a way that shuts down rather than invites discussion. Whatever your intended effect, it ends up reading like your motivation is to score points, in your own words, “in a way trivially predictable by monkey dynamics”, and it makes the comments on LW feel like a slightly more hostile place.
I’m in favor of you being able to participate because your comments have at times proved helpful, and calling out that which you disagree with is important for the health of the site, but only if you can do so in a way that leads to productive discussion rather than threatening it.
When you remove someone who says the truth but does so in an inconvenient way, you shift the Overton window of required politeness over toward the “maximally non-confrontational” side of the spectrum. Then, next time, you say the same sort of thing that you’re now saying to PDV, to the next-most-confrontational person. You keep going until no one can say that the emperor has no clothes, except in such oblique terms that a dogwhistle is a bomb siren by comparison.
Is this really what you want?
You might as well say that when you don’t criticize people for saying the truth in an unproductive way, then you shift the Overton window of required politeness over toward the “maximally confrontational” side. Next time, you give a pass to someone who sprinkles their comments with irrelevant insults. You keep going until you’re 4Chan.
Given that spaces other than 4Chan that have disagreements exist, I think it’s possible to put a fence on the slippery slope.
You might indeed say exactly that, which is why it’s important to differentiate between (a) criticizing people, (b) downvoting people, and (c) banning people. (Not that ‘mere’ criticism is problem-free—not at all! But very different dynamics result from these approaches.)
Or, to put it another way: given that spaces other than 4chan that have disagreements exist, we can conclude that people do, indeed, criticize people for saying the truth in an unproductive way.
In short: one person’s modus tollens is another’s modus ponens.
Edit: Or, to put it another way: of course it’s possible to put a fence on the slippery slope. And the way you build that fence is by doing exactly the thing that you’re implying we don’t need to do! (That being “don’t ban commenters who say the truth but in an inconvenient way”, in one direction; and “do criticize people for being unnecessarily uncivil”, in the other direction.)
I fail to see why we cannot both speak to what we think is true and do so in a civil way.
We can, of course. We should. But consider:
Too little politeness is clearly unfortunate. Too much politeness is… possibly somewhat annoying? (Certainly not a huge problem.)
Too much truth is an oxymoron (there can never be too much truth; the optimal amount of truth is also the maximum possible amount of truth). Too little truth is catastrophic.
Therefore, to ban truthful people for being insufficiently civil is to court catastrophe; meanwhile, to fail to ban insufficiently civil people who are truthful is… somewhat unfortunate, at best.
Truth, in short, is the object. Civility is an additional desideratum (however important it may be). Losing the former makes the latter irrelevant.
This model assumes that truth and politeness are in a simple tradeoff relationship, and if that were true I would absolutely agree that truth is more important. But I don’t think the territory is that simple.
Our goal is not just to maximize the truth on the website at this current moment, but to optimize the process of discovering and sharing truth. One effect of a comment is to directly share some truth, and so removing comments or banning people does, in the short term, reduce the amount of truth produced. However, another effect of a comment is to incentivize or disincentivize other posters, by creating a welcoming or hostile environment. Since those posters may also produce comments that contain truth, a comment can in this way indirectly encourage or discourage the later production of truth.
The downstream effects of the incentivization/disincentivization of comments containing truth will, I think, often swamp the short-term effect of the specific truth shared in the specific comment. (This has some similarities to the long-termist view in altruism.)
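(One rough way to formalize this, purely as a sketch with made-up notation: let $t(c)$ be the truth a comment $c$ directly contributes, let $\Delta_u(c)$ be the change in how much user $u$ subsequently participates because of $c$, and let $\bar{t}_u$ be the average truth per contribution from $u$. Then the total contribution of $c$ is something like

$$V(c) \approx t(c) + \sum_u \Delta_u(c)\,\bar{t}_u,$$

and my claim is that the second term often dominates the first.)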
This analysis explains why 4chan is not at the forefront of scientific discovery.
When you say you won’t be able to trust it as a source of insight, do you mean 1) that you expect little of what’s posted to LW will be interesting / valuable / insightful, or 2) that it will be dangerous for you yourself to read LW, at the risk of being misled?
I expect content’s prominence on LesserWrong to be the result of political dynamics and filter bubbles, not insight or value. I do not expect it to be truth-tracking.
That is complete. I’m out.