Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.
(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)
Since these kinds of discussions can often feel thankless, I wanted to write an explicit comment saying I am grateful for @1a3orn’s, @JohnofCharleston’s, @Thomas Kwa’s and @Alexander Gietelink Oldenziel’s comments on this thread. I disagree with many of you, but you presented a bunch of good arguments and evidence on a thing that does actually seem quite important for the future of the world.
I don’t currently like the muted comment system for many practical reasons, though I like it as an idea!
We could go into the details of it, but I feel a bit like stuff is getting too anchored on that specific proposal, and explaining why I don’t feel excited about this one specific solution out of dozens of ways of approaching this feels like it would both take a long time, and not really help anyone. Though if you think you would find it valuable I could do it.
Let me know if you want to go there, and I could write more. I am pretty interested in discussing the general principles and constraints, though. I’ve just historically not gotten that much out of discussions where someone who hasn’t been trying to balance a lot of the complicated design considerations comes in with a specific proposal, but I have gotten a lot of value out of people raising problems and considerations (and overall I appreciate your thoughts in this thread).
Are you referring to yourself and the LW team here?
Nope, I think we have plenty of authority. I was referring here to authors trying to maintain any kind of discussion quality in the absence of our help; unfortunately, we are very limited in the amount of moderation we can do ourselves, as it already takes up a huge fraction of our staff time.
The culture that matters is one that does not unilaterally cede control to authors over who is allowed to point out their errors or how they can do that.
Yes, we all agree on that. Posts are a great tool for pointing out errors in other posts, as I have pointed out many times. Yes, comments are also great, but frequently discussions just go better if you move them to the top level, and the attention-allocation mechanisms work so much better.
Also, de facto, people almost never ban anyone else from their posts. I agree we maybe should just ban more people ourselves, though that’s hard, and I prefer the world where, instead of banning someone like Said site-wide, we have a middle ground where individual authors who are into his style of commenting can still have him around. But if there were only one choice on this side, then clearly I would ban Said and other people in his reference class, as this site would quickly fall into something close to full abandonment if we did not actively moderate that.
Like, I don’t believe you that you want the site moderators to just ban many more people from the whole site. It just seems like a dumb loss for everyone.
When there is repeated as opposed to one-off conflict between users is precisely the time for a neutral outsider to step in, instead of empowering one of the sides to unilaterally cut off the other.
Ok, then tell me, what do you propose we do when people repeatedly get into unproductive conversations, usually generated by a small number of users on the site? Do you want us to just ban them from the site in general? Many times they have totally fine interactions with many sub-parts of the site, they just don’t get along with some specific person. Empowering the users who have a history of contributing positively to the site (or at least a crude proxy of that, in the form of karma) to have some control over their own posts seems like the most economical solution.
We could also maintain a ban list where authors can appeal to us to ban a user from their posts, though honestly, I think there are almost no bans I would not have approved this way. I agree that if we had lots of authors making bans that seemed crazy to me, then we should change something about this system, but when I look at the register of bans, I see approximately no ban where it does not just seem better, to me as a moderator, for these people to keep their distance from each other.
I would believe this iff banned users were nonetheless allowed (by moderator fiat) to type up a comment saying “I have written a response to this post at [insert link],” which actually shows up in the comment section of the original post.
Oh, also just for the record, we have a pingback section at the bottom of every post, above the comment section, which basically achieves exactly this. If you write a popular response to a post, it will show up right below the post for anyone to see!
(Just for the historical record, there is a moderation log visible at lesswrong/moderation which does this, though it’s not the single most beautiful page on the site)
The place where it gets displayed is below the comment box when you start typing something:
It’s confusing for it to say “profile”. It should ideally say “user settings”, as the goal of that sentence was to explain to authors where they can set these and not to explain to readers where to find these. I’ll edit it.
I agree it’s working fine for that specific comment thread! But it’s just not really true for most posts, which tend to have fewer than 10 comments, and where voting activity after 1-2 rounds of replies gets very heavily dominated by the people actively participating in the thread, which (especially as things get heated) causes the vote distribution to end up very random and to stop working as a signal.
The popular comments section is affected by net karma, though I think it’s a pretty delicate balance. My current guess is that indeed the vast majority of people who upvoted Said’s comment didn’t read Gordon’s post, and upvoted Said’s comment because it seemed like a dunk on something they didn’t like, irrespective of whether that actually applied to Gordon’s post in any coherent way.
I think the popular comments section is on net good, but in this case it seems to me to have failed (for reasons largely unrelated to other things discussed in this thread), and it has happened a bunch of times that it promoted contextless responses to stuff in a way that made the overall discussion quality worse.
Fundamentally, the amount of sorting you can do in a comment section is just very limited. I feel like this isn’t a very controversial or messy point. On any given post you can sort maybe 3-4 top-level threads into the right order, so karma is supplying at most a few bits of prioritization for the order.
In the context of post lists, you are often sorting lists of posts hundreds of items long, and karma is the primary determinant of whether something gets read at all. I am not saying karma has absolutely no effect on comments, but clearly it’s much weaker there (and indeed, it absolutely does not reliably prevent bad comments from getting lots of visibility, and does not remotely reliably cause good comments to get visibility, especially if you wade into domains where people have stronger pre-existing feelings and are looking for anything to upvote that looks vaguely like their own side, and anything to downvote that looks vaguely like the opposing side).
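To put back-of-the-envelope numbers on that asymmetry: fully ordering $n$ items conveys at most $\log_2 n!$ bits of prioritization, so

$$\log_2 4! \approx 4.6 \text{ bits (a 4-thread comment section)} \quad \text{vs.} \quad \log_2 200! \approx 1245 \text{ bits (a 200-post list),}$$

and for post lists that ordering is also what decides whether something gets read at all.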
I think the group that is missing the most are other active commenters. Maybe you meant to include them in “authors” but I think it makes sense to break them out.
The thing that IMO burns people the most is trying to engage in good faith with someone, or investing a lot of effort into explaining things, only to end up in a spot where they feel like that work was mostly used against them in weird social ways, or where their reward for sticking their head out and saying anything was to be met with sneering. This applies to authors, but it also applies a lot to commenters.
One of the most central value propositions that makes people want to be on LW instead of the rest of the internet is the fact that they don’t have to do moderation themselves. Commenters want to participate in curated environments. Commenters love having people seriously engage with what they say (even when combined with intense disagreement) and hate having things rudely dismissed or sneered at.
Commenters hate having to do norm enforcement themselves, especially from an authority-less position. Indeed, maybe the most common complaint I hear from people about other forums, and about community engagement more generally, is that they feel like they end up spending a lot of their time telling other people to be reasonable, that this creates social drama and a feeling of needing to rally troops via a badly codified mechanism of social enforcement, and that this is super draining and exhausting, and so they leave. Moderator burnout is also extremely common, especially when moderators do not feel like they get to act with authority.
By having both the LW moderation team do lots of active, opinionated moderation and allowing authors to do the same, we can create spaces that are moderated and can have any kind of culture, as opposed to just what happens by default on the internet. Realistically, you cannot settle norm disputes in individual comment threads, especially not with a rotating cast of characters on each post, so you need a mixture of authors and site moderators working on codifying norms and thinking hard about cultural components.
I think there is something in the space, but I wouldn’t speak in absolutes this way. I think many bad things deserve to be pushed down. I just don’t think Said has a great track record of pushing down the right things, and the resulting discussions seem to me to reliably produce misunderstandings and confusions.
I think a major thing that I do not like is “sneering”. Going into the cultural context of sneering and why it happens and how it propagates itself is a bit much for this comment thread, but a lot of what I experience from Said is that kind of sneering culture, which interfaces with having standards, but not in a super clear directional way.
(I did not see any corresponding “Rats for Harris” or “EAs for Harris” posts; maybe that’s a selection effect problem on my end?)
Are you somehow implying the community isn’t extremely predominantly left? If I remember the stats correctly, for US rationalists it’s something like 60% Democrats, 30% libertarians, <10% Republicans. The reason why nobody wrote a “Rats for Harris” post is that it would be a very weird framing, given that the large majority of the community votes pretty stably Democratic.
(I do appreciate the attempt to bridge the epistemic gap, but just to be clear, this does not capture the relevant dimensions in my mind. The culture I want on LessWrong is highly competitive in many ways.
I care a lot about having standards and striving in intense ways for the site. I just don’t think the way Said does it really produces that, and instead think it mostly produces lots of people getting angry at each other while exacerbating tribal dynamics.
The situation seems more similar to a competitive team where anyone gets screamed at for basically any motion, by a coach who doesn’t perform themselves but just complains in long tirades any time anyone does anything, making references to long-outdated methods of practice and training, with a constant air of superiority. This is indeed a common failure mode for competitive sports teams, but the right response to it is not to have no standards, it’s to have good standards and, most importantly, some functional way of updating the standards.)
and is much weaker than what I thought Ben was arguing for.
I don’t think Ryan (or I) was intending to imply a measure of degree, so my guess is that unfortunately communication somehow still failed. Like, I don’t think Ryan (or Ben) is saying “it’s OK to do these things, you just have to ask for consent”. Ryan was just trying to point out a specific way in which things don’t bottom out in consequentialist analysis.
If you end up walking away thinking that Ben believes “the key thing to get right for AI companies is to ask for consent before building the doomsday machine”, which I feel is the only interpretation of what you could mean by “weaker” that I currently have, then I think that would be a pretty deep misunderstanding.
I don’t think this really tracks. I don’t think I’ve seen many people want to “become part of the political right”, and it’s not even the case that many people voted for Republicans in recent elections (indeed, my guess is fewer rationalists voted for Republicans in the last three elections than in previous ones).
I do think it’s the case that on a decade scale people have become more anti-left. I think some of that is explained by background shift: wokeness is on the decline and anti-wokeness is more popular, so base rates are shifting. Additionally, people tend to be embedded in coastal left-leaning communities, so they develop antibodies against wokeness.
Maybe this is what you were saying, but “out of sight, out of mind” implies a miscalibration about attitudes on the right, whereas my sense is people are mostly reasonably calibrated about anti-intellectualism on the right; approximately no one was considering joining that part of the right, or felt that threatened by it on a personal level, and so it doesn’t come up very much.
But—surely—China has a functioning market economy, where you can incentivize things by paying for them? Sure it’s “Communist” but it’s not communist like that.
I have lots of uncertainty about this! For example, it does appear that China basically gutted its software startup industry a few months ago, which is really costly, and it wouldn’t surprise me if this has large negative effects on Chinese drone effectiveness, since software seems like a non-trivial fraction of the difficulty, especially for coordinating drone swarms.
My current model is that, all things considered, the Chinese market economy is a lot weaker. This doesn’t mean there are no domains where China excels at building great products in its market economy, but I assign a much higher likelihood that something will mess up their efforts to do something in the market than I do for the U.S. IDK, I am at something like 65% that the US market economy is sufficiently stronger here to produce a long-run advantage in drone manufacturing if the US government decides to spend heavily on it, which really isn’t that confident.
Can you give some examples of such people? (Are you one of them?)
My guess is: something like more than half of the authors on this site who have posted more than 10 posts that you commented on feel this way about you in particular. Eliezer, Scott Alexander, Jacob Falkovich, Elizabeth Van Nostrand, me, dozens of others. This is not a rare position. I would have to dig to give you an exact list, but the list is not short, and it includes large fractions of almost everyone one might consider a strong contributor to the site.
We have had this conversation many times. I have listed examples of people like this in the past. If you find yourself still incapable of modeling more than 50% of the top authors on the site whose very moderation guidelines you are opining on, after many, many dozens of hours of conversation on the topic, maybe you should just stay out of these conversations, as you are clearly incapable of modeling the preferences of the majority of people who would be affected by your suggested changes to the moderation guidelines.
A good start, if you actually wanted to understand any of this at all, would be to stop strawmanning these people repeatedly by inserting random ellipses and question marks and snide remarks implying the absurdity of their position. Yes, people have preferences about how people interact with them that go beyond obvious unambiguous norm violations, what a shocker! Yes, it is of course completely possible to be hostile in a plausibly deniable way. Indeed, the most foundational essay for the moderation guidelines on this site mentions this directly (emphasis mine):
Somewhere in the vastness of the Internet, it is happening even now. It was once a well-kept garden of intelligent discussion, where knowledgeable and interested folk came, attracted by the high quality of speech they saw ongoing. But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
Well-kept gardens do not tend to die by accepting obviously norm-violating content. They usually die by people being bad discourse participants in plausibly deniable ways: just kind of worse, but not obviously and unambiguously worse, than what has come before. This is moderation 101. Yes, of course authors, and everyone else, will leave if you fill a space with people just kind of being bad discourse participants, even if they don’t do anything egregious. How could reality work any other way?
If the author doesn’t trust the community to vote bad takes down into less visibility, when they have no direct COI, why should I trust the author to do it unilaterally, when they do? Writing great content doesn’t equate to rationality when it comes to handling criticism.
Comments almost never get downvoted. Most posts don’t get that many comments. Karma doesn’t reliably work as a visibility mechanism for comments.
I do think a good karma system would take into account authors banning users, or some other form of author feedback, and then that would be reflected in the default visibility of their comments. I don’t have a super elegant way of building that into the site, and adding it this way seems better than none at all, since it seems like a much stronger signal than normal aggregate upvotes and downvotes.
I do think the optimal version of this would somehow leverage the karma system and all the voting information we have available, making it so that authors have some kind of substantial super-vote to control visibility that is nonetheless balanced against the votes of other people if they really care, but we don’t have that, and I don’t have a great design for it that wouldn’t be super complicated.
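Though to gesture at the shape of it anyway, here is a minimal sketch of one such rule, with completely made-up weights and field names (this is not anything we have built or designed, and a real version would need to interact with the rest of the karma system):

```typescript
// Illustrative sketch only: an "author super-vote" that is strong by
// default but clamped so the community can override it if it cares enough.
// All names and weights here are hypothetical.

interface CommentVotes {
  communityKarma: number; // net karma from ordinary votes
  authorVote: -1 | 0 | 1; // the post author's super-vote, if any
}

const AUTHOR_WEIGHT = 10;    // one super-vote counts like ~10 karma
const COMMUNITY_FLOOR = 0.5; // an author can cancel at most half of a
                             // strongly positive community score

function visibilityScore({ communityKarma, authorVote }: CommentVotes): number {
  const shifted = communityKarma + authorVote * AUTHOR_WEIGHT;
  if (authorVote < 0 && communityKarma > AUTHOR_WEIGHT) {
    // Community signal outweighs a single super-vote: limit the burial.
    return Math.max(shifted, communityKarma * COMMUNITY_FLOOR);
  }
  return shifted;
}

// visibilityScore({ communityKarma: 3,  authorVote: -1 }) === -7  (buried)
// visibilityScore({ communityKarma: 40, authorVote: -1 }) === 30  (still visible)
```

Even this toy version already has awkward edge cases (e.g. what happens as a comment crosses the threshold while votes come in), which is roughly what I mean by “super complicated”.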
Overall, I think you should model karma as currently approximately irrelevant for managing the visibility of comments, due to the limited volume of comments and the thread structure making strict karma sorting impossible. So anything of the form “but isn’t comment visibility handled by the karma system?” is basically just totally wrong, absent substantial changes to the karma system and voting.
On the other hand, karma is working great for sorting posts and driving post discoverability, which is why getting critics to write top-level posts (which will automatically be visible right below any post they are criticizing, due to our pingback system) is a much better mechanism for causing their content to get appropriate attention, both for readers of the post they are criticizing and for people who are generally interested in associated content.
I am not sure I am understanding your proposal then. If you want people to keep track of two conversations with different participants, you need the comment threads to be visually separated. Nobody will be able to keep track of who exactly is muted when in one big comment thread, so as long as this statement is true, I can’t think of any other way to implement that but to move things into fully separate sections:
So conversation 1 is all the unflagged comments, and conversation 2 is all the flagged comments.
And then the whole point of having a section like this (in my mind) is to not force the author to give a platform to random bad takes in proportion to their own popularity, without those people doing anything close to proportional work. The author is the person who attracted 95% of the attention in the first place, almost always by doing good work, and that control is the whole reason we are even considering this proposal, so I don’t understand what there is to gain by doing this and not moving it to the bottom.
In general it seems obvious to me that when someone writes great content, this should get them some control over the discussion participants and the culture of the discussion. They obviously always have that as a BATNA by moving to their own blog or Substack or wherever, and I certainly don’t want to contribute to a platform and community that gives me no control over its culture, given that the BATNA of being part of one I do get to shape is alive and real and clearly better by my lights (and no, I do not consider myself universally conflicted out of making any kind of decision about who I want to talk to or what kind of culture I want to create; I generally think discussions and cultures I shape are better than ones I don’t, and this seems like a very reasonable epistemic state to me).
We’ve considered something similar to this: basically having two comment sections below each post, one sorted to the top and one sorted to the bottom. Authors could move things to the bottom comment section, and there would be a general expectation that authors don’t really engage with the bottom comment section very much (and might mute it completely, or might remove the author’s ability to put things into the top comment section).
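The data model for that is trivial; the hard part was always the UI. As a purely illustrative sketch (hypothetical field names, not actual LessWrong code):

```typescript
// Sketch of the two-comment-sections idea: one section sorted to the top,
// one sorted to the bottom, with the author able to move comments down.
// Field names are hypothetical.

interface Comment {
  id: string;
  karma: number;
  movedDownByAuthor: boolean;
}

function layoutCommentSections(comments: Comment[]) {
  const byKarma = (a: Comment, b: Comment) => b.karma - a.karma;
  return {
    // Rendered directly under the post.
    top: comments.filter(c => !c.movedDownByAuthor).sort(byKarma),
    // Rendered below the top section; the author is expected to mostly
    // not engage here, and might mute it entirely.
    bottom: comments.filter(c => c.movedDownByAuthor).sort(byKarma),
  };
}
```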
I had a few UI drafts of this, but nothing that didn’t feel kind of awkward and confusing.
(I think this is a better starting point than having individual comment threads muted, though explaining my models would take a while, and I might not get around to it)
My guess is most recommendation engines in use these days are ML/DL based. At least I can’t think of any major platform that hasn’t yet switched over, based on what I read.
In my mind things aren’t neatly categorized into “top N reasons”, but here are some quick thoughts:
(I.) I am generally very averse to having any UI element that shows on individual comments. It just clutters things up quickly and requires people to scan each individual comment. I have put an enormous amount of effort into trying to reduce the number of UI elements on comments. I much prefer organizing things into sections which people can parse once, and then assume everything has the same type signature.
(II.) I think a core thing I want UI to do in the space is to hit the right balance between “making it salient to commenters that they are getting more filtered evidence” and “giving the author social legitimacy to control their own space, combined with checks and balances”.
I expect this specific proposal to end up feeling like a constant mark of shame that authors are hesitant to use because they don’t feel the legitimacy to use it, and, most importantly, one that makes it very hard for them to get feedback on whether others judge them for how they use it, inducing paranoia and anxiety. I think that would make the feature largely unused. In that world it isn’t really helping anyone, though it will make authors feel additionally guilty, since we will have technically handed them a tool for the job, but one they expect will come with social censure if used; so we will hear fewer complaints and have less agency to address the underlying problems.
(III.) I think there is a nearby world where you have n-directional muting (i.e. any user can mute any other user), and I expect people to conceptually confuse that with what is going on here, and there is no natural way to extend this feature to the n-directional case.
I generally dislike n-directional muting for other reasons, though it’s something I’ve considered over the years.
(IV.) I think it’s good to have different in-flows of users into different conversations. I think the mute-thread structure would basically just cause every commenter to participate in two conversations, one with the author, and one without the author, and I expect that to be a bunch worse than to have something like two separate comment sections, or a top-level response post where the two different conversations can end up with substantially non-overlapping sets of participants.
(V.) The strongest argument against anything in this space is just the complexity it adds. The ban system IMO is currently good because you mostly don’t have to track it. Almost nobody ever gets banned, but it helps with the most extreme cases, and the moderators keep track of things not getting out of control with lots of unreasonable-seeming bans. Either this or an additional comment section, or any of the other solutions discussed, is one additional thing to keep track of for how LessWrong works, and there really is already a lot to keep track of, and we should have a very, very strong prior that we should generally not add complexity but remove it.
(VI.) Relatedly, I think the mark of a good feature on LessWrong is something that solves multiple problems at once, not just one problem. Whenever I’ve felt happy about a feature decision, it’s usually been after keeping a bag of problems in the back of my mind for many months or years, then at some point trying to find a solution to a new problem and noticing that it would also solve one or multiple other problems at the same time. This solution doesn’t have that hallmark, and I’ve mostly regretted it whenever I’ve shipped something without that hallmark, ending up adding complexity to both the codebase and the UI that didn’t pay off.