Enforcing norms of any kind can be done either by (a) physically preventing people from breaking them—we might call this “hard enforcement”—or (b) inflicting unpleasantness on people who violate said norms, and/or making it clear that violators will face such unpleasantness—which we might call “soft enforcement”.[1]
Bans are hard enforcement. Downvotes are more like soft enforcement, though karma does matter for things like sorting and whether a comment is expanded by default, so there’s some element of hardness. Posting critical comments is definitely soft enforcement; posting a lot of intensely critical comments is intense soft enforcement. Now, compare with Said’s description elsewhere:
On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.
Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Said is clearly aware of hard enforcement and calls that “enforcement”. Meanwhile, what I call “soft enforcement”, he says isn’t anything at all like “enforcement”. One could put this down to a mere difference in terms, but I think there’s a little more.
It seems accurate to say that Said has an extremely thick skin. Probably to some extent deliberately so. This is admirable, and among other things means that he will cheerfully call out any local emperor for having no clothes; the prospect of any kind of social backlash (“soft enforcement”) seems not to bother him, perhaps not even to register with him. Lots of people would do well to be more like him in this respect.
However, it seems that Said may be unaware of the degree to which he’s different from most people in this[2]. (Either in naturally having a thick skin, or in thinking “this is an ideal which everyone should be aspiring to, and therefore e.g. no one would willingly admit to being hurt by critical comments and downvotes”, or something like that.) It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I anticipate a possible objection here: “Well, if I incentivize people to think more rigorously, that seems like a good thing.” At this point the question is “Do Said’s comments enforce any norm at all?”, not “Are Said’s comments pushing people in the right direction?”. (For what it’s worth, my vague memory includes some instances of “Said is asking the right questions” and other instances of “Said is asking dumb questions”. I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.)
Second anticipated objection: “I’m not deliberately trying to enforce anything.” I think it’s possible to do this non-deliberately, even self-destructively. For example, a person could tell their friends “Please tell me if I’m ever messing up in xyz scenarios”, but then, when a friend does so, respond by interrogating the friend about what makes them qualified to judge xyz, have they ever been wrong about xyz, were they under any kind of drugs or emotional distraction or sleep deprivation at the time of observation, do they have any ulterior motives or reasons for self-deception, do their peers generally approve of their judgment, how smart are they really, what were their test scores, have they achieved anything intellectually impressive, etc. (This is avoiding the probably more common failure mode of getting offended at the criticism and expressing anger.) Like, technically, those things are kind of useful for making the report more informative, and some of them might be worth asking in context, but it is easy to imagine the friend finding it unpleasant, either because it took far more time than they expected, or because it became rather invasive and possibly touched on topics they find unpleasant; and the friend concluding “Yeesh. This interaction was not worth it; I won’t bother next time.”
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Anyway, it seems that the claim from various complainants is that Said is, deliberately or not, providing an interface of “If your posts aren’t written in a certain way, then Said is likely to ask a bunch of clarifying questions, with the result that either you may look ~unrigorous or you have to write a bunch of time-consuming replies”, and thus this constitutes soft-enforcing a norm of “writing posts in a certain way”.
Or, regarding the “clarifying questions need replies or else you look ~unrigorous” norm… Actually, technically, I would say that’s not a norm Said enforces; it’s more like a norm he invokes (that is, the norm is preexisting, and Said creates situations in which it applies). As Said says elsewhere, it’s just a fact that, if someone asks a clarifying question and you don’t have an answer, there are various possible explanations for this, one of which is “your idea is wrong”.[4] And I guess the act of asking a question implies (usually) that you believe the other person is likely to answer, so Said’s questions do promulgate this norm even if they don’t enforce it.
Moreover, this being the website that hosts Be Specific, this norm is stronger here than elsewhere. Which… I do like; I don’t want to make excuses for people being unrigorous or weak. But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category. There’s a benefit and a cost to doing so—the benefit being the resulting clarity, the cost being the time and any unpleasantness involved in answering. My brain generates the story “Said, with his extremely thick skin (and perhaps being a weird alien more generally), faces a very difficult task in relating to people who aren’t like him in that respect, and isn’t so unusually good at relating to others very unlike him that he’s able to judge the costs accurately; in practice he underestimates the costs and asks too often.”
[1] And usually anything that does (a) also does (b). Removing someone’s ability to do a thing, especially a thing they were choosing to do in the past, is likely unpleasant on first principles; plus, the methods of removing capabilities are usually pretty coarse-grained. In the physical world, imprisonment is the prototypical example here.
[2] It also seems that Duncan is the polar opposite of this (or at least is in that direction), which makes it less surprising that it’d be difficult for them to come to common understanding.
[3] There was a time at work when I was running a script that caused problems for a system. I’d say this could be called the system’s fault—one piece of the causal chain was a policy of the system’s that I’d never heard of, and that seemed to me like the wrong policy; another piece was the system misidentifying a certain behavior.
In any case, the guy running the system didn’t agree with the goal of my script, and I suspect resented me because of the trouble I’d caused (in that and in some other interactions). I don’t think he had the standing to say I’m forbidden from running it, period; but what he did was tell me to put my script into a pull request, and then do some amount of nitpicking the fuck out of it and requesting additional features; one might call it an isolated demand for rigor, by the standards of other scripts. Anyway, this was a side project for me, and I didn’t care enough about it to push through that, so I dropped it. (Whether this was his intent, I’m not sure, but he certainly didn’t object to the result.)
[4] Incidentally, the more reasonable and respectable the questioner looks, the less plausible explanations like “you think the question is stupid or not worth your time” become, and therefore the greater the pressure to reply on someone who doesn’t want to look wrong. (One wonders if Said should wear a jester’s cap or something, or change his username to “troll”. Or maybe Said could trigger a “Name Examples Bot”, which wears a silly hat, in lieu of asking directly.)
It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I have already commented extensively on this sort of thing. In short, if someone perceives something so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion as receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction, so severe that I cannot see how such a person could possibly expect to participate usefully in any sort of discussion forum, much less one that’s supposed to be about “advancing the art of rationality” or any such thing.
I mean, forget, for the moment, any question of “incentivizing” anyone in any way. I have no idea how it’s even possible to have discussions about anything without anyone ever asking you for clarification or explanation of anything. What does that even look like? I really struggle to imagine how anything can ever get accomplished or communicated while avoiding such things.
And the idea that “requesting more clarification and explanation” constitutes “norm enforcement” in virtue of its unpleasantness (rather than, say, being a way to exemplify praiseworthy behaviors) seems like a thoroughly bizarre view. Indeed, it’s especially bizarre on Less Wrong! Of all the forums on the internet, here, where it was written that “the first virtue is curiosity”, and that “the first and most fundamental question of rationality is ‘what do you think you know, and why do you think you know it?’”…!
I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.
There’s certainly a good deal of intellectual and mental diversity among the Less Wrong membership. (Perhaps not quite enough, I sometimes think, but a respectable amount, compared to most other places.) I count this as a good thing.
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Yes. Having to file a bunch of bureaucratic forms (or else not getting the result you want). Having to answer your friend’s questions (on pain of a quarrel or hurtful interpersonal conflict with someone close to you).
But nobody has to reply to comments. You can just downvote and move on with your life. (Heck, you don’t even have to read comments.)
As for the rest, well, happily, you include in your comment the rebuttal to the rest of what I might have wanted to rebut myself. I agree that I am not, in any reasonable sense of the word, “enforcing” anything. (The only part of this latter section of your comment that I take issue with is the stuff about “costs”; but that, I have already commented on, above.)
I’ll single out just one last bit:
But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category.
I think you’ll find that I don’t say “name three examples” every single time someone mentions a category, either (nor—to pre-empt the obvious objection—is there any obvious non-hyperbolic version of this implied claim which is true). In fact I’m not sure I’ve ever said it. As gwern writes:
‘Examples?’ is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
In short, if someone perceives [...] receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction
I must confess that I don’t much sympathize with those who object so strongly. I feel comfortable with letting conversations on the public internet fade without explanation. “I would love to reply to everyone [or, in some cases, “I used to reply to everyone”] but that would take up more than all of my time” is something I’ve seen from plenty of people. If I were on the receiving end of the worst version of the questioning behavior from you, I suspect I’d roll my eyes, sigh, say to myself “Said is being obtuse”, and move on.
That said, I know that I am also a weird alien. So here is my attempt to describe the others:
“I do reply to every single comment” is a thing some people do, often in their early engagement on a platform, when their status is uncertain. (I did something close to that on a different forum recently, albeit more calculatedly as an “I want to reward people for engaging with my post so they’ll do more of it”.) There isn’t really a unified Internet Etiquette that everyone knows; the unspoken rules in general, and plausibly on this specifically, vary widely from place to place.
I also do feel some pressure to reply if the commenter is a friend I see in person—that it’s a little awkward if I don’t. This presumably doesn’t apply here.
I think some people have a self-image that they’re “polite”, which they don’t reevaluate especially often, and believe that it means doing certain things such as giving decent replies to everyone; and when someone creates a situation in which being “polite” means doing a lot of work, that may lead to significant unpleasantness (and possibly lead them to resent whoever put them in that situation; a popular example like this is Bilbo feeling he “has to” feed and entertain all the dwarves who come visiting, being very polite and gracious while internally finding the whole thing very worrying and annoying).
If the conversation begins well enough, that may create more of a politeness obligation in some people’s heads. The fact that someone had to create the term “tapping out” is evidence that some people’s priors were that simply dropping the conversation was impolite.
Looking at what’s been said, “frustration” is mentioned. It seems likely that, ex ante, people expect that answering your questions will lead to some reward (you’ll say “Aha, I understand, thank you”; they’ll be pleased with this result), and if instead it leads to several levels of “I don’t understand, please explain further” before they finally give up, then they may be disappointed ex post. Particularly if they’ve never had an interaction like this before, they might not have known what else to do, and just kept putting in effort much longer than a more sophisticated version of them would have recommended. Then they come away from the experience thinking, “I posted, and I ended up in a long interaction with Said, and wow, that sucked. Not eager to do that again.”
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
In some places on the internet, trolling is or has been a major problem. Making someone do a bunch of work by repeatedly asking “Why?” and “How do you know that?”, and generally applying an absurdly high standard of rigor, is probably a tactic that some trolls have engaged in to mess with people. (Some of my friends who like to tease have occasionally done that.) If someone seems to be asking a bunch of obtuse questions, I may at least wonder whether it’s deliberate. And interacting with someone you suspect might be trolling you—perhaps someone you ultimately decide is pretty trollish after a long, frustrating interaction—seems potentially uncomfortable.
(I personally tend to welcome the challenge of explaining myself, because I’m proud of my own reasoning skills (and probably being good at it makes the exercise more enjoyable) and aspire to always be able to do that; but others might not. Perhaps some people have memories of being tripped up and embarrassed. Such people should get over it, but given that not all of them have done so… we shouldn’t bend over backwards for them, to be sure, but a bit of effort to accommodate them seems justifiable.)
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
I find this hard to relate to—I’m extremely secure in my own intelligence, and react to the idea of someone being possibly smarter than me with “Ooh, I hope so, I wish that were so! (But I doubt it!)”; if someone comes away thinking I’m stupid, I tend to find that amusing, at worst disappointing (disappointed in them, that is). I suspect your background resembles mine in this respect.
But I hear that teachers and even parents, frequently enough for this to be a problem, feel threatened when a kid says they’re wrong (and backs it up). (To some extent this may be due to authority-keeping issues.) I hear that often kids in school are really afraid of being called, or shown to be, stupid. John Holt (writing from his experience as a teacher—the kids are probably age 10 or so) says:
The other day I decided to talk to the other section about what happens when you don’t understand what is going on. We had been chatting about something or other, and everyone seemed in a relaxed frame of mind, so I said, “You know, there’s something I’m curious about, and I wonder if you’d tell me.” They said, “What?” I said, “What do you think, what goes through your mind, when the teacher asks you a question and you don’t know the answer?”
It was a bombshell. Instantly a paralyzed silence fell on the room. Everyone stared at me with what I have learned to recognize as a tense expression. For a long time there wasn’t a sound. Finally Ben, who is bolder than most, broke the tension, and also answered my question, by saying in a loud voice, “Gulp!”
He spoke for everyone. They all began to clamor, and all said the same thing, that when the teacher asked them a question and they didn’t know the answer they were scared half to death.
I was flabbergasted—to find this in a school which people think of as progressive; which does its best not to put pressure on little children; which does not give marks in the lower grades; which tries to keep children from feeling that they’re in some kind of race. I asked them why they felt gulpish. They said they were afraid of failing, afraid of being kept back, afraid of being called stupid, afraid of feeling themselves stupid.
Stupid. Why is it such a deadly insult to these children, almost the worst thing they can think of to call each other? Where do they learn this? Even in the kindest and gentlest of schools, children are afraid, many of them a great deal of the time, some of them almost all the time. This is a hard fact of life to deal with. What can we do about it?
(By the way, someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high (relative to their peers in their formative years), so this would be a self-censoring fear. I don’t think I’ve seen anyone mention intellectual insecurity in connection to this whole discussion, but I’d say it likely plays at least a minor role, and plausibly plays a major role.)
Again, if school traumatizes people into having irrational fears about this, that’s not a good thing, it’s the schools’ fault, and meanwhile the people should get over it; but again, if a bunch of people nevertheless haven’t gotten over it, it is useful to know this, and it’s justifiable to put some effort into accommodating them. How much effort is up for debate.
Eliezer himself doesn’t say “Name three examples” every single time
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
Gwern would be asking for 3 examples
Gwern is strong. You (and Zack) are also strong. Some people are weaker. One could design a forum that made zero accommodations for the weak. The idea is appealing; I expect I’d enjoy reading it and suspect I could hold my own, commenting there, and maybe write a couple of posts. I think some say that Less Wrong 1.0 was this, and too few people wanted to post there and the site died. One could argue that, even if that’s true, today there are enough people (plus enough constant influx due to interest in AI) to have a critical mass and such a site would be viable. Maybe. One could counterargue that the process of flushing out the weak is noisy and distracting, and might drive away the good people.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan, he says that most people are not “Keepers” (trained ultra-rationalists, always looking unflinchingly at harsh truths, expected to remain calm and clear-headed no matter what they’re dealing with, etc.), that most people are not fit to be Keepers, and that it’s fine and good that they don’t hold themselves to that standard. Now, like, I guess one could imagine there should be at least enough Keepers to have their own forum, and perhaps Less Wrong could be such a forum. Well, one might say that having an active forum that trains people who are not yet Keepers is a strictly easier target than, and likely a prerequisite for, an active and long-lived Keeper forum. If LW is to be the Keeper forum, where are the Keepers trained? The SSC subreddit? Just trial by fire and take the fraction of a fraction of the population who come to the forum untrained and do well without any nurturing?
I don’t know. It could be the right idea. I would give it… 25%?, that this is better than some more civilian-accommodating thing like what we have today. I am really not an expert on forecasting this, and am pretty comfortable leaving it up to the current LW team. (I also note that, if we manage to do something like enhance the overall population’s intelligence by a couple of standard deviations—which I hope will be achievable in my lifetime—then the Keeper pipeline becomes much better.) And no, I don’t think it should do much in the way of accommodating civilians at the expense of the strong—but the optimal amount of doing that is more than zero.
Much of what you write here seems to me to be accurate descriptively, and I don’t have much quarrel with it. The two most salient points in response, I think, are:
To the original question that spawned this subthread (concerning “[implicit] enforcement of norms” by non-moderators, and how such a thing could possibly work), basically everything in your comment here is non-responsive. (Which is fine, of course—it doesn’t imply anything bad about your comment—but I just wanted to call attention to this.)
However accurate your characterizations may be descriptively, the (or, at least, an) important question is whether your prescriptions are good normatively. On that point I think we do have disagreement. (Details follow.)
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
“Basic competence” is usually a category error, I think. (Not always, but usually.) One can have basic competence at one’s profession, or at some task or specialty; and these things could be called into question. And there is certainly a norm, in most social contexts, that a non-specialist questioning the basic competence of a specialist is a faux pas. (I do not generally object to that norm in wider society, though I think there is good reason for such a norm to be weakened, at least, in a place like Less Wrong; but probably not absent entirely, indeed.)
What this means, then, is that if I write something about, let’s say, web development, and someone asks me for clarification of some point, then the implicatures of the question depend on whether the asker is himself a web dev. If so, then I address him as a fellow specialist, and interpret his question accordingly. If not, then I address him as a non-specialist, and likewise interpret his question accordingly. In the former case, the asker has standing to potentially question my basic competence, so if I cannot make myself clear to him, that is plausibly my fault. In the latter case, he has no such standing, but likewise a request for clarification from him can’t really be interpreted as questioning my basic competence in the first place; and any question that, from a specialist, would have that implication, from a non-specialist is merely revelatory of the asker’s own ignorance.
Nevertheless I think that you’re onto something possibly important here. Namely, I have long noticed that there is an idea, a meme, in the “rationalist community”, that indeed there is such a thing as a generalized “basic competence”, which manifests itself as the ability to understand issues of importance in, and effectively perform tasks in, a wide variety of domains, without the benefit of what we would usually see as the necessary experience, training, declarative knowledge, etc., that is required to gain expertise in the domain.
It’s been my observation that people who believe in this sort of “generalized basic competence”, and who view themselves as having it, (a) usually don’t have any such thing, (b) get quite offended when it’s called into question, even by the most indirect implication, or even conditionally. This fits the pattern you describe, in a way, but of course that is missing a key piece of the puzzle: what is unpleasant is not being asked for clarification, but being revealed to be a fraud (which would be the consequence of demonstrably failing to provide any satisfying clarification).
In some places on the internet, trolling is or has been a major problem.
But it’s quite possible, and not even very hard, to prove oneself a non-troll. (Which I think that I, for instance, have done many times over. There aren’t many trolls who invest as much work into a community as I have. I note this not to say something like “what I’ve contributed outweighs the harm”, as some of the moderators have suggested might be a relevant consideration—and which reasoning, quite frankly, makes me uncomfortable—but rather to say “all else aside, the troll hypothesis can safely be discarded”.)
In other words, yes, trolling exists, but for the purposes of this discussion we can set that fact aside. The LW moderation team have shown themselves to be more than sufficiently adept at dealing with such “cheap” attacks that we can, to a first (or even second or third) approximation, simply discount the possibility of trolling, when talking about actual discussions that happen here.
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
As it happens, I quite empathize with this worry—indeed I think that I can offer a steelman of your description here, which (I hope you’ll forgive me for saying) does seem to me to be just a bit of a strawman (or at least a weakman).
There are indeed some really impressive people on Less Wrong. (Their proportion in the overall membership is of course lower than it was in the “glory days”, but nevertheless they are a non-trivial contingent.) And the worry is not, perhaps, that one will be made to look stupid in front of them, but rather that one will waste their time. “Who am I,” the potential contributor might think, “to offer my paltry thoughts on any of these lofty matters, to be listed alongside the writings of these greats, such that the important and no doubt very busy people who read this website will have to sift through the dross of my embarrassingly half-formed theses and idle ramblings, in the course of their readings here?” And then, when such a person gets up the confidence and courage to post, if the comments they get prove at once (to their minds) that all their worries were right, that what they’ve written is worthless, little more than spam—well, surely they’ll be discouraged, their fears reinforced, their shaky confidence shattered; and they won’t post again. “I have nothing to contribute,” they will think, “that is worthy of this place; I know this for a fact; see how my attempts were received!”
I’ve seen many people express worries like this. And there are, I think, a few things to say about the matter.
First: however relevant this worry may have been once, it’s hardly relevant now.
This is for two reasons, of which the first is that the new Less Wrong is designed precisely to alleviate such worries, with the “personal” / “frontpage” distinction. Well, at least, that would be true, if not for the LW moderators’ quite frustrating policy of pushing posts to the frontpage section almost indiscriminately, all but erasing the distinction, and preventing it from having the salutary effect of alleviating such worries as I have described. (At least there’s Shortform, though?)
The second reason why this sort of worry is less relevant is simply that there’s so much more garbage on Less Wrong today. How plausible is it, really, to look at the current list of frontpage posts, and think “gosh, who am I to compete for readers’ time with these great writings, by these great minds?” Far more likely is the opposite thought: “what’s the point of hurling my thoughts into this churning whirlpool of mediocrity?” Alright, so it’s not quite Reddit, but it’s bad enough that the moderators have had to institute a whole new set of moderation policies to deal with the deluge! (And well done, I say, and long overdue—in this, I wholly support their efforts.)
Second: I recall someone (possibly Oliver Habryka? I am not sure) suggesting that the people who are most worried about not measuring up tend also to be those whose contributions would be some of the most useful. This is a model which is more or less the opposite of your suggestion that “someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high”; it claims, instead, something like “someone being afraid that they won’t measure up is probably Bayesian evidence that their intellectual standards as applied to themselves are high, and that their ideas are valuable”.
I am not sure to what extent I believe either of these two models. But let us take the latter model for granted, for a moment. Under this view, any sort of harsh criticism, or even just anything but the most gentle handling and the most assiduous bending-over-backwards to avoid any suggestion of criticism, risks driving away the most potentially valuable contributors.
Of course, one problem is that any lowering of standards mostly opens the floodgates to a tide of trash, which itself then acts to discourage useful contributions. But let’s imagine that you can solve that problem—that you can set up a most discerning filter, which keeps out all the mediocre nonsense, all the useless crap, but somehow does this without spooking the easily-spooked but high-value authors.
But even taking all of that for granted—you still haven’t solved the fundamental problems.
Problem (a): even the cleverest of thinkers and writers sometimes have good ideas but sometimes have bad ideas; or ideas that have flaws; or ideas that are missing key parts; or, heck, they simply make mistakes, accidentally cite the wrong thing and come to the wrong conclusion, misremember, miscount… you can’t engage only on the assumption that the author’s ideas are without flaw, and that your part is merely to learn respectfully at the author’s feet. That doesn’t work.
Problem (b): even supposing that an idea is perfect—what do you do with it? In order to make use of an idea, you must understand it, you must explore it; that means asking questions, asking for clarifications, asking for examples. That is (and this is a point which, incredibly, seems often to be totally lost in discussions like this) how people engage with ideas that excite them! (Otherwise—what? You say “wow, amazing” and that’s it? Or else—as I have personally seen, many times—you basically ignore what’s been written, and respond with some only vaguely related commentary of your own, which doesn’t engage with the post at all, isn’t any attempt to build anything out of it, but is just a sort of standalone bit of cleverness…)
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
No, this is just confused.
Of course I don’t have a bot that replies to every comment with “Be more specific”, but that’s not because there’s some sort of tradeoff; it’s simply that it’s not always appropriate or relevant or necessary. Why ask for clarification, if all is already clear? Why ask for examples, if they’ve already been provided, or none seem needed? Why ask for more specificity, if one’s interlocutor has already expressed themselves as specifically as the circumstances call for? If someone writes a post about “authenticity”, I may ask what they mean by the word; but what mystery, what significance, is there in the fact that I don’t do the same when someone writes a post about “apples”? I know what apples are. When people speak of “apples” it’s generally clear enough what they’re talking about. If not—then I would ask.
Gwern would be asking for 3 examples
Gwern is strong. You (and Zack) are also strong. Some people are weaker.
There is no shame in being weak. (It is an oft-held view, in matters of physical strength, that the strong should protect the weak; I endorse that view, and hold that it applies in matters of emotional and intellectual strength as well.) There may be shame in remaining weak when one can become strong, or in deliberately choosing weakness; but that may be disputed.
But there is definitely shame in using weakness as a weapon against the strong. That is contemptible.
Strength may not be required. But weakness must not be valorized. And while accommodating the weak is often good, it must never come at the expense of discouraging strength, for then the effort undermines itself, and ultimately engineers its own destruction.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan
I deliberately do not, and would not, cite Eliezer’s recent writings, and especially not those about dath ilan. I think that the ideas you refer to, in particular (about the Keepers, and so on), are dreadfully mistaken, to the point of being intellectually and morally corrosive.
All right, I’ll give it a try (cc @Said Achmiz).
Enforcing norms of any kind can be done either by (a) physically preventing people from breaking them—we might call this “hard enforcement”—or (b) inflicting unpleasantness on people who violate said norms, and/or making it clear that this will happen (that unpleasantness will be inflicted on violators), which we might call “soft enforcement”.[1]
Bans are hard enforcement. Downvotes are more like soft enforcement, though karma does matter for things like sorting and whether a comment is expanded by default, so there’s some element of hardness. Posting critical comments is definitely soft enforcement; posting a lot of intensely critical comments is intense soft enforcement. Now, compare with Said’s description elsewhere:

On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.

Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Said is clearly aware of hard enforcement and calls that “enforcement”. Meanwhile, what I call “soft enforcement”, he says isn’t anything at all like “enforcement”. One could put this down to a mere difference in terms, but I think there’s a little more.
It seems accurate to say that Said has an extremely thick skin. Probably to some extent deliberately so. This is admirable, and among other things means that he will cheerfully call out any local emperor for having no clothes; the prospect of any kind of social backlash (“soft enforcement”) seems to not bother him, perhaps not even register to him. Lots of people would do well to be more like him in this respect.
However, it seems that Said may be unaware of the degree to which he’s different from most people in this[2]. (Either in naturally having a thick skin, or in thinking “this is an ideal which everyone should be aspiring to, and therefore e.g. no one would willingly admit to being hurt by critical comments and downvotes”, or something like that.) It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I anticipate a possible objection here: “Well, if I incentivize people to think more rigorously, that seems like a good thing.” At this point the question is “Do Said’s comments enforce any norm at all?”, not “Are Said’s comments pushing people in the right direction?”. (For what it’s worth, my vague memory includes some instances of “Said is asking the right questions” and other instances of “Said is asking dumb questions”. I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.)
Second anticipated objection: “I’m not deliberately trying to enforce anything.” I think it’s possible to do this non-deliberately, even self-destructively. For example, a person could tell their friends “Please tell me if I’m ever messing up in xyz scenarios”, but then, when a friend does so, respond by interrogating the friend about what makes them qualified to judge xyz, have they ever been wrong about xyz, were they under any kind of drugs or emotional distraction or sleep deprivation at the time of observation, do they have any ulterior motives or reasons for self-deception, do their peers generally approve of their judgment, how smart are they really, what were their test scores, have they achieved anything intellectually impressive, etc. (This is avoiding the probably more common failure mode of getting offended at the criticism and expressing anger.) Like, technically, those things are kind of useful for making the report more informative, and some of them might be worth asking in context, but it is easy to imagine the friend finding it unpleasant, either because it took far more time than they expected, or because it became rather invasive and possibly touched on topics they find unpleasant; and the friend concluding “Yeesh. This interaction was not worth it; I won’t bother next time.”
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Anyway, it seems that the claim from various complainants is that Said is, deliberately or not, providing an interface of “If your posts aren’t written in a certain way, then Said is likely to ask a bunch of clarifying questions, with the result that either you may look ~unrigorous or you have to write a bunch of time-consuming replies”, and thus this constitutes soft-enforcing a norm of “writing posts in a certain way”.
Or, regarding the “clarifying questions need replies or else you look ~unrigorous” norm… Actually, technically, I would say that’s not a norm Said enforces; it’s more like a norm he invokes (that is, the norm is preexisting, and Said creates situations in which it applies). As Said says elsewhere, it’s just a fact that, if someone asks a clarifying question and you don’t have an answer, there are various possible explanations for this, one of which is “your idea is wrong”.[4] And I guess the act of asking a question implies (usually) that you believe the other person is likely to answer, so Said’s questions do promulgate this norm even if they don’t enforce it.
Moreover, this being the website that hosts Be Specific, this norm is stronger here than elsewhere. Which… I do like; I don’t want to make excuses for people being unrigorous or weak. But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category. There’s a benefit and a cost to doing so—the benefit being the resulting clarity, the cost being the time and any unpleasantness involved in answering. My brain generates the story “Said, with his extremely thick skin (and perhaps being a weird alien more generally), faces a very difficult task in relating to people who aren’t like him in that respect, and isn’t so unusually good at relating to others very unlike him that he’s able to judge the costs accurately; in practice he underestimates the costs and asks too often.”
And usually anything that does (a) also does (b). Removing someone’s ability to do a thing, especially a thing they were choosing to do in the past, is likely unpleasant on first principles; plus the methods of removing capabilities are usually pretty coarse-grained. In the physical world, imprisonment is the prototypical example here.
It also seems that Duncan is the polar opposite of this (or at least is in that direction), which makes it less surprising that it’d be difficult for them to come to common understanding.
There was a time at work when I was running a script that caused problems for a system. I’d say that this could be called the system’s fault—one piece of the causal chain was a policy of the system’s that I’d never heard of and that seemed like the wrong policy, and another piece was the system misidentifying a certain behavior.
In any case, the guy running the system didn’t agree with the goal of my script, and I suspect resented me because of the trouble I’d caused (in that and in some other interactions). I don’t think he had the standing to say I’m forbidden from running it, period; but what he did was tell me to put my script into a pull request, and then do some amount of nitpicking the fuck out of it and requesting additional features; one might call it an isolated demand for rigor, by the standards of other scripts. Anyway, this was a side project for me, and I didn’t care enough about it to push through that, so I dropped it. (Whether this was his intent, I’m not sure, but he certainly didn’t object to the result.)
Incidentally, the more reasonable and respectable the questioner looks, the less plausible explanations like “you think the question is stupid or not worth your time” become, and therefore the greater the pressure to reply on someone who doesn’t want to look wrong. (One wonders if Said should wear a jester’s cap or something, or change his username to “troll”. Or maybe Said can trigger a “Name Examples Bot”, which wears a silly hat, in lieu of asking directly.)
(Separately from my longer reply: I do want to thank you for making the attempt.)
I have already commented extensively on this sort of thing. In short, if someone perceives something so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion as receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction, so severe that I cannot see how such a person could possibly expect to participate usefully in any sort of discussion forum, much less one that’s supposed to be about “advancing the art of rationality” or any such thing.
I mean, forget, for the moment, any question of “incentivizing” anyone in any way. I have no idea how it’s even possible to have discussions about anything without anyone ever asking you for clarification or explanation of anything. What does that even look like? I really struggle to imagine how anything can ever get accomplished or communicated while avoiding such things.
And the idea that “requesting more clarification and explanation” constitutes “norm enforcement” in virtue of its unpleasantness (rather than, say, being a way to exemplify praiseworthy behaviors) seems like a thoroughly bizarre view. Indeed, it’s especially bizarre on Less Wrong! Of all the forums on the internet, here, where it was written that “the first virtue is curiosity”, and that “the first and most fundamental question of rationality is ‘what do you think you know, and why do you think you know it?’”…!
There’s certainly a good deal of intellectual and mental diversity among the Less Wrong membership. (Perhaps not quite enough, I sometimes think, but a respectable amount, compared to most other places.) I count this as a good thing.
Yes. Having to file a bunch of bureaucratic forms (or else not getting the result you want). Having to answer your friend’s questions (on pain of a quarrel or hurtful interpersonal conflict with someone close to you).
But nobody has to reply to comments. You can just downvote and move on with your life. (Heck, you don’t even have to read comments.)
As for the rest, well, happily, you include in your comment the rebuttal to the rest of what I might have wanted to rebut myself. I agree that I am not, in any reasonable sense of the word, “enforcing” anything. (The only part of this latter section of your comment that I take issue with is the stuff about “costs”; but that, I have already commented on, above.)
I’ll single out just one last bit:
I think you’ll find that I don’t say “name three examples” every single time someone mentions a category, either (nor—to pre-empt the obvious objection—is there any obvious non-hyperbolic version of this implied claim which is true). In fact I’m not sure I’ve ever said it. As gwern writes:
I must confess that I don’t sympathize much with those who object majorly. I feel comfortable with letting conversations on the public internet fade without explanation. “I would love to reply to everyone [or, in some cases, “I used to reply to everyone”] but that would take up more than all of my time” is something I’ve seen from plenty of people. If I were on the receiving end of the worst version of the questioning behavior from you, I suspect I’d roll my eyes, sigh, say to myself “Said is being obtuse”, and move on.
That said, I know that I am also a weird alien. So here is my attempt to describe the others:
“I do reply to every single comment” is a thing some people do, often in their early engagement on a platform, when their status is uncertain. (I did something close to that on a different forum recently, albeit more calculatedly as an “I want to reward people for engaging with my post so they’ll do more of it”.) There isn’t really a unified Internet Etiquette that everyone knows; the unspoken rules in general, and plausibly on this specifically, vary widely from place to place.
I also do feel some pressure to reply if the commenter is a friend I see in person—that it’s a little awkward if I don’t. This presumably doesn’t apply here.
I think some people have a self-image that they’re “polite”, which they don’t reevaluate especially often, and believe that it means doing certain things such as giving decent replies to everyone; and when someone creates a situation in which being “polite” means doing a lot of work, that may lead to significant unpleasantness (and possibly lead them to resent whoever put them in that situation; a popular example like this is Bilbo feeling he “has to” feed and entertain all the dwarves who come visiting, being very polite and gracious while internally finding the whole thing very worrying and annoying).
If the conversation begins well enough, that may create more of a politeness obligation in some people’s heads. The fact that someone had to create the term “tapping out” is evidence that some people’s priors were that simply dropping the conversation was impolite.
Looking at what’s been said, “frustration” is mentioned. It seems likely that, ex ante, people expect that answering your questions will lead to some reward (you’ll say “Aha, I understand, thank you”; they’ll be pleased with this result), and if instead it leads to several levels of “I don’t understand, please explain further” before they finally give up, then they may be disappointed ex post. Particularly if they’ve never had an interaction like this before, they might have not known what else to do and just kept putting in effort much longer than a more sophisticated version of them would have recommended. Then they come away from the experience thinking, “I posted, and I ended up in a long interaction with Said, and wow, that sucked. Not eager to do that again.”
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
In some places on the internet, trolling is or has been a major problem. Making someone do a bunch of work by repeatedly asking “Why?” and “How do you know that?”, and generally applying an absurdly high standard of rigor, is probably a tactic that some trolls have engaged in to mess with people. (Some of my friends who like to tease have occasionally done that.) If someone seems to be asking a bunch of obtuse questions, I may at least wonder whether it’s deliberate. And interacting with someone you suspect might be trolling you—perhaps someone you ultimately decide is pretty trollish after a long, frustrating interaction—seems potentially uncomfortable.
(I personally tend to welcome the challenge of explaining myself, because I’m proud of my own reasoning skills (and probably being good at it makes the exercise more enjoyable) and aspire to always be able to do that; but others might not. Perhaps some people have memories of being tripped up and embarrassed. Such people should get over it, but given that not all of them have done so… we shouldn’t bend over backwards for them, to be sure, but a bit of effort to accommodate them seems justifiable.)
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
I find this hard to relate to—I’m extremely secure in my own intelligence, and react to the idea of someone being possibly smarter than me with “Ooh, I hope so, I wish that were so! (But I doubt it!)”; if someone comes away thinking I’m stupid, I tend to find that amusing, at worst disappointing (disappointed in them, that is). I suspect your background resembles mine in this respect.
But I hear that teachers and even parents, frequently enough for this to be a problem, feel threatened when a kid says they’re wrong (and backs it up). (To some extent this may be due to authority-keeping issues.) I hear that often kids in school are really afraid of being called, or shown to be, stupid. John Holt (writing from his experience as a teacher—the kids are probably age 10 or so) says:
(By the way, someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high (relative to their peers in their formative years), so this would be a self-censoring fear. I don’t think I’ve seen anyone mention intellectual insecurity in connection to this whole discussion, but I’d say it likely plays at least a minor role, and plausibly plays a major role.)
Again, if school traumatizes people into having irrational fears about this, that’s not a good thing, it’s the schools’ fault, and meanwhile the people should get over it; but again, if a bunch of people nevertheless haven’t gotten over it, it is useful to know this, and it’s justifiable to put some effort into accommodating them. How much effort is up for debate.
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
Gwern is strong. You (and Zack) are also strong. Some people are weaker. One could design a forum that made zero accommodations for the weak. The idea is appealing; I expect I’d enjoy reading it and suspect I could hold my own, commenting there, and maybe write a couple of posts. I think some say that Less Wrong 1.0 was this, and too few people wanted to post there and the site died. One could argue that, even if that’s true, today there are enough people (plus enough constant influx due to interest in AI) to have a critical mass and such a site would be viable. Maybe. One could counterargue that the process of flushing out the weak is noisy and distracting, and might drive away the good people.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan, he says that most people are not “Keepers” (trained ultra-rationalists, always looking unflinchingly at harsh truths, expected to remain calm and clear-headed no matter what they’re dealing with, etc.), that most people are not fit to be Keepers, and that it’s fine and good that they don’t hold themselves to that standard. Now, like, I guess one could imagine there should be at least enough Keepers to have their own forum, and perhaps Less Wrong could be such a forum. Well, one might say that having an active forum that trains people who are not yet Keepers is a strictly easier target than, and likely a prerequisite for, an active and long-lived Keeper forum. If LW is to be the Keeper forum, where are the Keepers trained? The SSC subreddit? Just trial by fire and take the fraction of a fraction of the population who come to the forum untrained and do well without any nurturing?
I don’t know. It could be the right idea. I would give it maybe 25% that this is better than some more civilian-accommodating thing like what we have today. I am really not an expert on forecasting this, and am pretty comfortable leaving it up to the current LW team. (I also note that, if we manage to do something like enhance the overall population’s intelligence by a couple of standard deviations—which I hope will be achievable in my lifetime—then the Keeper pipeline becomes much better.) And no, I don’t think it should do much in the way of accommodating civilians at the expense of the strong—but the optimal amount of doing that is more than zero.
Much of what you write here seems to me to be accurate descriptively, and I don’t have much quarrel with it. The two most salient points in response, I think, are:
To the original question that spawned this subthread (concerning “[implicit] enforcement of norms” by non-moderators, and how such a thing could possibly work), basically everything in your comment here is non-responsive. (Which is fine, of course—it doesn’t imply anything bad about your comment—but I just wanted to call attention to this.)
However accurate your characterizations may be descriptively, the (or, at least, an) important question is whether your prescriptions are good normatively. On that point I think we do have disagreement. (Details follow.)
“Basic competence” is usually a category error, I think. (Not always, but usually.) One can have basic competence at one’s profession, or at some task or specialty; and these things could be called into question. And there is certainly a norm, in most social contexts, that a non-specialist questioning the basic competence of a specialist is a faux pas. (I do not generally object to that norm in wider society, though I think there is good reason for such a norm to be weakened, at least, in a place like Less Wrong; but probably not absent entirely, indeed.)
What this means, then, is that if I write something about, let’s say, web development, and someone asks me for clarification of some point, then the implicatures of the question depend on whether the asker is himself a web dev. If so, then I address him as a fellow specialist, and interpret his question accordingly. If not, then I address him as a non-specialist, and likewise interpret his question accordingly. In the former case, the asker has standing to potentially question my basic competence, so if I cannot make myself clear to him, that is plausibly my fault. In the latter case, he has no such standing, but likewise a request for clarification from him can’t really be interpreted as questioning my basic competence in the first place; and any question that, from a specialist, would have that implication, from a non-specialist is merely revelatory of the asker’s own ignorance.
Nevertheless I think that you’re onto something possibly important here. Namely, I have long noticed that there is an idea, a meme, in the “rationalist community”, that indeed there is such a thing as a generalized “basic competence”, which manifests itself as the ability to understand issues of importance in, and effectively perform tasks in, a wide variety of domains, without the benefit of what we would usually see as the necessary experience, training, declarative knowledge, etc., that is required to gain expertise in the domain.
It’s been my observation that people who believe in this sort of “generalized basic competence”, and who view themselves as having it, (a) usually don’t have any such thing, and (b) get quite offended when it’s called into question, even by the most indirect implication, or even conditionally. This fits the pattern you describe, in a way, but of course it is missing a key piece of the puzzle: what is unpleasant is not being asked for clarification, but being revealed to be a fraud (which would be the consequence of demonstrably failing to provide any satisfying clarification).
Definitely. (As I’ve alluded to earlier in this comment section, I am quite familiar with this problem from the administrator’s side.)
But it’s quite possible, and not even very hard, to prove oneself a non-troll. (Which I think that I, for instance, have done many times over. There aren’t many trolls who invest as much work into a community as I have. I note this not to say something like “what I’ve contributed outweighs the harm”, as some of the moderators have suggested might be a relevant consideration—and which reasoning, quite frankly, makes me uncomfortable—but rather to say “all else aside, the troll hypothesis can safely be discarded”.)
In other words, yes, trolling exists, but for the purposes of this discussion we can set that fact aside. The LW moderation team have shown themselves to be more than sufficiently adept at dealing with such “cheap” attacks that we can, to a first (or even second or third) approximation, simply discount the possibility of trolling, when talking about actual discussions that happen here.
As it happens, I quite empathize with this worry—indeed I think that I can offer a steelman of your description here, which (I hope you’ll forgive me for saying) does seem to me to be just a bit of a strawman (or at least a weakman).
Of course, one problem is that any lowering of standards mostly opens the floodgates to a tide of trash, which itself then acts to discourage useful contributions. But let’s imagine that you can solve that problem—that you can set up a most discerning filter, which keeps out all the mediocre nonsense, all the useless crap, but somehow does this without spooking the easily-spooked but high-value authors.
But even taking all of that for granted—you still haven’t solved the fundamental problems.
Problem (a): even the cleverest of thinkers and writers sometimes have good ideas and sometimes have bad ideas; or ideas that have flaws; or ideas that are missing key parts; or, heck, they simply make mistakes, accidentally cite the wrong thing and come to the wrong conclusion, misremember, miscount… you can’t engage only on the assumption that the author’s ideas are without flaw, and that your part is only to respectfully learn at the author’s feet. That doesn’t work.
Problem (b): even supposing that an idea is perfect—what do you do with it? In order to make use of an idea, you must understand it, you must explore it; that means asking questions, asking for clarifications, asking for examples. That is (and this is a point which, incredibly, seems often to be totally lost in discussions like this) how people engage with ideas that excite them! (Otherwise—what? You say “wow, amazing” and that’s it? Or else—as I have personally seen, many times—you basically ignore what’s been written, and respond with some only vaguely related commentary of your own, which doesn’t engage with the post at all, isn’t any attempt to build anything out of it, but is just a sort of standalone bit of cleverness…)
No, this is just confused.
Of course I don’t have a bot that replies to every comment with “Be more specific”, but that’s not because there’s some sort of tradeoff; it’s simply that it’s not always appropriate or relevant or necessary. Why ask for clarification, if all is already clear? Why ask for examples, if they’ve already been provided, or none seem needed? Why ask for more specificity, if one’s interlocutor has already expressed themselves as specifically as the circumstances call for? If someone writes a post about “authenticity”, I may ask what they mean by the word; but what mystery, what significance, is there in the fact that I don’t do the same when someone writes a post about “apples”? I know what apples are. When people speak of “apples” it’s generally clear enough what they’re talking about. If not—then I would ask.
There is no shame in being weak. (It is an oft-held view, in matters of physical strength, that the strong should protect the weak; I endorse that view, and hold that it applies in matters of emotional and intellectual strength as well.) There may be shame in remaining weak when one can become strong, or in deliberately choosing weakness; but that may be disputed.
But there is definitely shame in using weakness as a weapon against the strong. That is contemptible.
Strength may not be required. But weakness must not be valorized. And while accommodating the weak is often good, it must never come at the expense of discouraging strength, for then the effort undermines itself, and ultimately engineers its own destruction.
I deliberately do not, and would not, cite Eliezer’s recent writings, and especially not those about dath ilan. I think that the ideas you refer to, in particular (about the Keepers, and so on), are dreadfully mistaken, to the point of being intellectually and morally corrosive.
Just for the record, your first comment was quite good at capturing some of the models that drive me and the other moderators.
This one is not, which is fine and wasn’t necessarily your goal, but I want to prevent any future misunderstandings.