At the risk of guessing wrong, and perhaps typical-mind-fallacying, I imagine that you’re [rightly?] feeling a lot of frustration, exasperation, and even despair about moderation on LessWrong. You’ve spent dozens of hours (more?) and tens of thousands of words trying to make LessWrong the garden you think it ought to be (and to protect yourself here against attackers), and just to try to uphold what are, indeed, basic standards for truth-seeking discourse. You’ve written that some small validation goes a long way, so this is me trying to say that I think your feelings have a helluva lot of validity.
I don’t think that you and I share exactly the same ideals for LessWrong. PerfectLessWrong!Ruby and PerfectLessWrong!Duncan would be different (or heck, even just VeryGoodLessWrongs), though I’m also pretty sure that you’d be much happier with my ideal; you’d think it was pretty good, if not perfect. Respectable, maybe adequate. A garden.
And I’m really sad that the current LessWrong falls really, really far short of my own ideals (and Ray’s ideals, and Oli’s ideals, etc.). And not just short of a super-amazing lofty ideal, but also short of a “this place is really under control” kind of ideal. I take responsibility for it not being so, and I’m sorry. I wouldn’t blame you for saying this isn’t good enough and wanting to leave[1]; there are some pretty bad flaws.
But sir, you impugn my and my site’s honor. This is not a perfect garden, but it is also not a jungle. And there is an awful lot of gardening going on. I take it very seriously that LessWrong is not just any place, and it takes ongoing work to keep it so. This is approximately my full-time job (and that of others too), and while I don’t work 80-hour weeks, I feel like I put a tonne of my soul into this site.
Over the last year, I’ve been particularly focused on what I suspect are existential threats to LessWrong (not even to the ideal, just to the decently-valuable thing we have now). I think this very much counts as gardening. The major one over the last year is how to both have all the AI content (and I do think AI is the most important topic right now) and not have it eat LessWrong and turn it into the AI website rather than the truth-seeking/effectiveness/rationality website, which is what I believe is actually its true spirit[2]. So far, I feel like we’re still failing at this. On many days, the Frontpage is 90+% AI posts. It’s not been a trivial problem, for many reasons.
The other existential problem, beyond the topic balance, is one I’ve been anticipating for a long time and that is now heating up: the deluge of new users flowing to the site because of the rising prominence of AI. Moderation is currently our top focus, but even before that, the first thing we do when the team gets in each morning is review every new post, every first-time submission from a user, and the activity of users who are getting a lot of downvotes. It’s not exactly fun, but we do it basically every day[3]. In the interests of greater transparency and accountability, we will soon build a Rejected Content section of the site where you’ll be able to view the content we didn’t let go live; I predict that will demonstrate just how much this garden is getting tended, and that counterfactually the quality would be a lot, lot worse. You can see here a recent internal document that describes my sense of priorities for the team.
I think the discourse norms and bad behavior (and I’m willing to say now, in advance of my more detailed thoughts, that there’s a lot of badness to how Said behaves) are also serious threats to the site, and we do give those attention too. They haven’t felt like the most pressing threats (or, for that matter, opportunities) recently, and I could be making a mistake there, but we do take them seriously. Our focus (which I think has a high opportunity cost) has turned to the exchanges between you and Said this week; plausibly you’ve done us a service by drawing our attention to behavior we should be deeming intolerable, and it’s easily 50-100 hours of team attention.
It is plausible that the LessWrong team has made a mistake in not prioritizing this stuff more highly over the years (it has been years – though Said and Zack and others have in fact received hundreds of hours of attention), and there are definitely particular projects that I think turned out to be misguided and less valuable than marginal moderation would have been. But I’ll claim that it was definitely not an obvious mistake that we haven’t addressed the problems you’re most focused on.
It is actually on my radar, and something I’ve actively wanted for a while, to have a system that reliably gets the mod team to show up and say “cut it out” sometimes. I suspect that’s what should have happened a lot earlier on in your recent exchanges with Said. I might have liked to say “Duncan, we the mods certify that if you disengage, it is no mark against you” or something. I’m not sure. Ray mentioned the concept of “Maslow’s Hierarchy of Moderation”; I like that idea, and would like to get soon to the higher level where we’re actively intervening in these cases. I regret that I in particular on the team am not great at dropping what I’m doing to pivot when these threads come up; perhaps I should work on that.
A claim you could make is that the LessWrong team should have hired more people so we could cover more of this. Arguing about why we haven’t (or why Lightcone as a whole didn’t keep more team members on the LessWrong team) is a bigger topic. I think things would have been worse if the LessWrong team had been bigger most of the time, and, barring an unusually good candidate, it’d be bad to hire right now.
All this to say: this garden has a lot of shortcomings, but the team works quite hard to keep it at least as good as it is and to try to make it better. Fair enough if it doesn’t meet your standards or isn’t how you’d do it; perhaps we’re not all that competent. Fair enough.
(And also you’ve had a positive influence on us, so your efforts are not completely in vain. We do refer to your moderation post/philosophy even if we haven’t adopted it wholesale, and we make use of many of the concepts you’ve crystallized. For that I am grateful. Those are contributions I’d be sad to lose, but I don’t want to push you to offer them to us if doing so is too costly for you.)
I will also claim, though, that a better version of Duncan would be better able to tolerate the shortcomings of LessWrong and to improve it too; that even if your efforts to change LW aren’t working well enough, there are efforts you could make on yourself that would make you better, and better able to benefit from the LessWrong that is.
[2] Something like: the core identity of LessWrong is rationality. In alternate worlds, that stays the same, but the major topic could be something else.
[3] Over the weekend, some parts of the reviewing get deferred until the work week.
This is fair, and I apologize; in that line I was speaking from despair and not particularly tracking Truth.
A [less straightforwardly wrong and unfair] phrasing would have been something like “this is not a Japanese tea garden; it is a British cottage garden.”
I have been to the Japanese tea garden in Portland, and found it exquisite, so I think I get your referent there.
Aye, indeed it is not that.
I probably rushed this comment out the door from a “defend my honor, set the record straight” instinct, which I don’t think reliably leads to good discourse and is not what I should be modeling on LessWrong.