If it’s worth banning[1] someone (and even urgently investing development resources into a feature that enables that banning-or-whatever!) because their comments might, possibly, on some occasions, potentially mislead users into falsely believing X… then it surely must be worthwhile to simply outright tell users ¬X?
(I mean, of all the things that it might be nice to tell new users, this, which—if this topic, and all the moderators’ comments on it, are to be believed—is so consequential, has to be right up at the top of the list?)
Now that you’ve clarified your objection here, I want to note that this does not respond to the central point of the grandparent comment:
If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?
Communicating this to users seems like an obvious win, and one which would make a huge chunk of this entire discussion utterly moot.
If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?
Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style seems unlikely to be the right choice. You are a UI designer and you are well aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution.
But even assuming we did add such a message, there are many other problems:
Posting such a message would communicate a level of importance for this specific norm that is not commensurate with its actual importance; it does not actually come up very frequently in conversations that don’t involve you and a small number of other users. We have the standard frontpage commenting guidelines, and they cover what I consider the actually most important things to communicate, and they are approximately the maximum length I expect new users to read. Adding this warning would have to displace one of the existing guidelines, which seems very unlikely to be worth it.
Banner blindness is real: if you put the same block of text everywhere, people will quickly learn to ignore it. This has already happened with the existing moderation guidelines and frontpage guidelines.
If you have a sign in a space that says “don’t scream at people” but then lots of people do actually scream at you in that room, this doesn’t actually help very much, and more likely just reduces trust in your ability to set any kind of norm in your space. I’ve done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience. The correct response by users to your comments, in the presence of the box with the guideline, would be “There is a very prominent rule that says I am not obligated to respond, so why aren’t you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?”, which then would just bring us back to square one.
My guess is you will respond to this with some statement of the form “but I have said many times that I do not think the norms are such that you have an obligation to respond”, but man, subtext and text do just differ frequently in communication, and the subtext of your comments does really just tend to communicate the opposite. A way out of this situation might be that you just include a disclaimer in the first comment on every post, but I can also imagine that not working for a bunch of messy reasons.
I can also imagine you responding to this with “but I can’t possibly create an obligation to respond, the only people who can do that are the moderators”, which seems to be a stance implied by some other comments you wrote recently. This stance seems to me to fail to model how actual social obligations develop and how people build knowledge about social norms in a space. The moderators only set a small fraction of the norms and culture of the site, and of course individual users can create an obligation for someone to respond.
I am not super interested in going into depth here, but felt somewhat obligated to reply since your suggestion had some number of upvotes.
First, concerning the first half of your comment (re: importance of this information, best way of communicating it):
I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”
Have you checked that users understand that they don’t have an obligation to respond to comments?
If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)
Second, concerning the second half of your comment:
Frankly, this whole perspective you describe just seems bizarre.
Of course I can’t possibly create a formal obligation to respond to comments. Of course only the moderators can do that. I can’t even create social norms that responses are expected, if the moderators don’t support me in this (and especially if they actively oppose me). I’ve never said that such a formal obligation or social norm exists; and if I ever did say that, all it would take is a moderator posting a comment saying “no, actually” to unambiguously controvert the claim.
But on the other hand, I can’t create an epistemic obligation to respond, either—because it already either exists or already doesn’t exist, regardless of what I think or do.
So, you say:
I’ve really done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience.
If someone writes a post and someone else (regardless of who it is!) writes a comment that says “what are some examples?”, then whether the post author “faces humiliation” (hardly the wording I’d choose, but let’s go with it) in front of the Less Wrong audience if they don’t respond is… not something that I can meaningfully affect. That judgment is in the minds of the aforesaid audience. I can’t make people judge thus, nor can I stop them from doing so. To ascribe this effect to me, or to any specific commenter, seems like willful denial of reality.
The correct response by users to your comments, in the presence of the box with the guideline, would be “There is a very prominent rule that says I am not obligated to respond, so why aren’t you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?”, which then would just bring us back to square one.
This would be a highly unreasonable response. And the correct counter-response by moderators, to such a question, would be:
“Because users can’t ‘create a strong obligation for you to respond’. We’ve made it clear that you have no such obligation. (And the commenters certainly aren’t claiming otherwise, as you can see.) It would be utterly absurd for us to moderate or delete these comments, just because you don’t want to respond to them. If you feel that you must respond, respond; if you don’t want to, don’t. You’re an adult and this is your decision to make.”
(You might also add that the downvote button exists for a reason. You might point out, additionally, that low-karma comments are hidden by default. And if the comments in question are actually highly upvoted, well, that suggests something, doesn’t it?)
(I am not planning to engage further at this point.
My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don’t think I am saying particularly complicated things, and I think I’ve communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them.
My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we’ll continue to take some moderator actions until things look better by our models. I think we’ve both gone far beyond our duty of effort to explain where we are coming from and what our models are.)
In the first part of the grandparent comment, I asked a couple of questions. I can’t possibly “figure out what you mean” in those cases, since they were questions about what you’ve done or haven’t done, and about what you think of something I asked.
In the second part of the grandparent comment, I gave arguments for why some things you said seem wrong or incoherent. There, too, “figuring out what you mean” seems like an inapplicable concept.
I think we’ve both gone far beyond our duty of effort to explain where we are coming from and what our models are.
You and the other moderators have certainly written many words. But only the last few comments on this topic have contained even an attempted explanation of what problem you’re trying to solve (this “enforcement of norms” thing), and there, you’ve not only not “gone far beyond your duty” to explain—you’ve explicitly disclaimed any attempt at explanation. You’ve outright said that you won’t explain and won’t try!
(I wrote the following before habryka wrote his message)
While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I’ve been expressing concerns about in this particular discussion.
The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).
But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)
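The trigger heuristic sketched above (“show the nudge when a user has downvoted-and-replied twice in one thread, or when ~2 users in a thread have done so”) could look something like the following. All names here are invented for illustration; this is purely a hypothetical sketch of the proposed feature, not anything from LessWrong’s actual codebase, and the thresholds are the approximate ones from the comment.

```python
from collections import Counter


class NudgeTracker:
    """Hypothetical sketch: decide when to show a "consider downvoting and
    moving on" nudge, based on downvote-and-reply events in a comment thread."""

    def __init__(self, per_user_threshold=2, distinct_user_threshold=2):
        self.per_user_threshold = per_user_threshold
        self.distinct_user_threshold = distinct_user_threshold
        # Maps (thread_id, user_id) -> number of downvote-and-reply events.
        self.events = Counter()

    def record_downvote_and_reply(self, thread_id, user_id):
        """Call when a user both downvotes and replies to the same comment."""
        self.events[(thread_id, user_id)] += 1

    def should_show_nudge(self, thread_id, user_id):
        # Trigger if this user has downvoted-and-replied twice in this thread...
        if self.events[(thread_id, user_id)] >= self.per_user_threshold:
            return True
        # ...or if roughly two distinct users in the thread have done so at
        # least once (the "~2 other users" case; the original suggestion is
        # deliberately approximate about this condition).
        active_users = sum(
            1 for (t, _), n in self.events.items() if t == thread_id and n >= 1
        )
        return active_users >= self.distinct_user_threshold
```

The interesting design question, as the comment notes, is less the counting than choosing thresholds that surface the message at the moment it is actually useful, rather than adding to ambient noise.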
Oh? My mistake, then. Should it be “because their comments have, on some occasions, misled users into falsely believing X”?
(It’s not clear to me, I will say, whether you are claiming this is actually something that ever happened. Are you? I will note that, as you’ll find if you peruse my comment history, I have on more than one occasion taken pains to explicitly clarify that Less Wrong does not, in fact, have a norm that says that responding to comments is mandatory, which is the opposite of misleading people into believing that such a norm exists…)
The philosophical disagreement is related to but not itself the thing I believe Ray is saying is bad. The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do, and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts. Enforcing norms on behalf of a space that you don’t have buy-in for and that the space would reject tricks people into wasting their time and energy trying to be good citizens of the space in a way that isn’t helping and isn’t being asked of them.
If you did so, I think that behavior ought to be clearly punished in some way. I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm, and also regardless of whether you were fully aware that you were doing so. I think it’s often correct to issue a blanket punishment of a costly behavior even on the occasions that it is done unknowingly, to ensure that there is a consistent incentive against the behavior — similar to how it is typically illegal to commit a crime even if you aren’t aware what you did was a crime.
The problem is implicit enforcement of norms. Your stated beliefs do help alleviate this but only somewhat. And, like Ben also said in that comment, from a moderator perspective it’s often correct to take mod action regardless of whether someone meant to do something we think has had an outsized harm on the site.
I’ve now spent (honestly more than) the amount of time I endorse on this discussion. I am still mulling over the overall discussion, but in the interest of declaring this done for now, I’m declaring that we’ll leave the rate limit in place for ~3 months, and re-evaluate then. I feel pretty confident doing this because it seems commensurate with the original moderation warning (i.e. a 3 month rate limit seems similar to me in magnitude to a 1-month ban, and I think Said’s comments in the Duncan/Said conflict count as a triggering instance).
I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won’t have the impacts the mod team is worried about. I don’t know that we explained this maximally well, but I think we explained it well enough that it should be fairly obvious to you why your comment here is missing the point, and if it’s not, I don’t really know what to do about that.
No. This is still oversimplifying the issue, which I specifically disclaimed.
Alright, fair enough, so then…
The problem is implicit enforcement of norms.
… but then my next question is:
What the heck is “implicit enforcement of norms”??
I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won’t have the impacts the mod team is worried about. I don’t know that we explained this maximally well
To be quite honest, I think you have barely explained it at all. I’ve been trying to get an explanation out of you, and I have to say: it’s like pulling teeth. It seems like we’re getting somewhere, finally? Maybe?
You’re asking me to change my commenting behavior. I can’t even consider doing that unless I know what you think the problem is.
So, questions:
What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?
This “implicit enforcement of norms” (whatever it is)—is it a problem in addition to making false claims about what norms exist?
If the answer to #2 is “yes”, then what is your response to my earlier comments pointing out that no such false claims took place?
A norm is a pattern of behavior, something people can recognize and enact. Feeding a norm involves making a pattern of behavior more available (easy to learn and perceive), and more desirable (motivating its enactment, punishing its non-enactment). A norm can involve self-enforcement (here “self” refers to the norm, not to a person), adjoining punishment of non-enforcers and reward of enforcers as part of the norm. A well-fed norm is ubiquitous status quo, so available you can’t unsee it. It can still be opted-out of, by not being enacted or enforced, at the cost of punishment from those who enforce it. It can be opposed by conspicuously doing the opposite of what the norm prescribes, breaking the pattern, thus feeding a new norm of conspicuously opposing the original norm.
Almost all anti-epistemology is epistemic damage perpetrated by self-enforcing norms. Tolerance is boundaries against enforcement of norms. Intolerance of tolerance breaks it down, tolerating tolerance allows it to survive, restricting virality of self-enforcing norms. The self-enforcing norm of tolerance that punishes intolerance potentially exterminates valuable norms, not obviously a good idea.
So there is a norm of responding to criticism, its power is the weight of obligation to do that. It always exists in principle, at some level of power, not as categorically absent or present. I think clearly there are many ways of feeding that norm, or not depriving it of influence, that are rather implicit.
(Edit: Some ninja-editing, Said quoted the pre-edit version of third paragraph. Also fixed the error in second paragraph where I originally equivocated between tolerating tolerance and self-enforcing tolerance.)
So there is a norm of responding to criticism. I think clearly there are many ways of feeding that norm, or not depriving it of influence, that are rather implicit.
Perhaps, for some values of “feeding that norm” and “not depriving it of influence”. But is this “enforcement”? I do not think so. As far as I can tell, when there is a governing power (and there is surely one here), enforcement of the power’s rules can be done by that power only. (Power can be delegated—such as by the LW admins granting authors the ability to ban users from their posts—but otherwise, it is unitary. And such delegated power isn’t at all what’s being discussed here, as far as I can tell.)
That’s fair, but I predict that the central moderators’ complaint is in the vicinity of what I described, and has nothing to do with more specific interpretations of enforcement.
If so, then that complaint seems wildly unreasonable. The power of moderators to enforce a norm (or a norm’s opposite) is vastly greater than the power of any ordinary user to subtly influence the culture toward acceptance or rejection of a norm. A single comment from a moderator so comprehensively outweighs the influence, on norm-formation, of even hundreds of comments from any ordinary user, that it seems difficult to believe that moderators would ever need to do anything but post the very occasional short comment that links to a statement of the rules/norms and reaffirms that those rules/norms are still in effect.
(At least, for norms of the sort that we’re discussing. It would be different for, e.g., “users should do X”. You can punish people for breaking rules of the form “users should never do X”; that’s easy enough. Rules/norms of the form “users don’t need to do X”—i.e., those like the one we’ve been discussing—are even easier; you don’t need to punish anything, just occasionally reaffirm or remind people that X is not mandatory. But “users should do X” is tricky, if X isn’t something that you can feasibly mandate; that takes encouragement, incentives, etc. But, of course, that isn’t at all the sort of thing we’re talking about…)
Everyone can feed a norm, and direct action by moderators can be helpless before strong norms, as scorched-earth capabilities can still be insufficient for reaching more subtle targets. Thus discouraging the feeding of particular norms rather than going against the norms themselves.
occasionally reaffirm or remind people that X is not mandatory
If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it’s not mandatory doesn’t obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.
If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it’s not mandatory doesn’t obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.
What you are describing would have to be a very well-entrenched and widespread norm, supported by many users, and opposed by few users. Such a thing is perhaps possible (I have my doubts about this; it seems to me that such a hypothetical scenario would also require, for one thing, a lack of buy-in from the moderators); but even if it is—note how far we have traveled from anything resembling the situation at hand!
Motivation gets internalized, following a norm can be consciously endorsed, disobeying a norm can be emotionally valent. So it’s not just about external influence in affecting the norm, there is also the issue of what to do when the norm is already in someone’s head. To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster. But I think it’s a real thing that happens all the time.
This particular norm is obviously well-known in the wider world, some people have it well-entrenched in themselves. The problem discussed above was reinforcing or spreading the norm, but there is also a problem of triggering the norm. It might be a borderline case of feeding it (in the form of its claim to apply on LW as well), but most of the effect is in influencing people who already buy the norm towards enacting it, by setting up central conditions for its enactment. Which can be unrewarding for them, but necessary on the pain of disobeying the norm entrenched in their mind.
For example, what lsusr is talking about here is trying not to trigger the norm. Statements are less imposing than questions in that they are less valent as triggers for response-obligation norms. This respects boundaries of people’s emotional equilibrium, maintains comfort. When the norms/emotions make unhealthy demands on one’s behavior, this leads to more serious issues. It’s worth correcting, but not without awareness of what might be going on. I guess this comes back to motivating some interpretative labor, but I think there are relevant heuristics at all levels of subtlety.
To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster.
Just so.
In general, what you are talking about seems to me to be very much a case of catering to utility monsters, and denying that people have the responsibility to manage their own feelings. It should, no doubt, be permissible to behave in such ways (i.e., to carefully try to avoid triggering various unhealthy, corrosive, and self-sabotaging habits / beliefs, etc.), but it surely ought not be mandatory. That incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive.
EDIT: Also, and importantly, I think that describing this sort of thing as a “norm” is fundamentally inaccurate. Such habits/beliefs may contribute to creating social norms, but they are not themselves social norms; the distinction matters.
a case of catering to utility monsters [...] incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive
That’s a side of an idealism debate, a valid argument that pushes in this direction, but there are other arguments that push in the opposite direction, it’s not one-sided.
Some people change, given time or appropriate prodding. There are ideological (as in the set of endorsed principles) or emotional flaws, lack of capability at projecting sufficiently thick skin, or of thinking in a way that makes thick skin unnecessary, with defenses against admitting this or being called out on it. It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering, as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly.
I retain my view that to a first approximation, people don’t change.
And even if they do—well, when they’ve changed, then they can participate usefully and non-destructively. Personal flaws are, in a sense, forgiveable, as we are all human, and none of us is perfect; but “forgiveable” does not mean “tolerable, in the context of this community, this endeavor, this task”.
It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering
I think we are very far from “zero” in this regard. Going all the way to “zero” is not even what I am proposing, nor would propose (for example, I am entirely in favor of forbidding personal insults, vulgarity, etc., even if some hypothetical ideal reasoner would be entirely unfazed even by such things).
But that the damage done by catering to “utility monsters” of the sort who find requests for clarification to be severely unpleasant, is profound and far-ranging, seems to me to be too obvious to seriously dispute. It’s hypothetically possible to acknowledge this while claiming that failing to cater thusly has even more severely damaging consequences, but—well, that would be one heck of an uphill battle, to make that case.
(as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly)
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples and there are no standard words like “weak insult” to delineate the issue, it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I agree that unreasonable demands are unreasonable. Pointing them out gains more weight after you signal ability to correctly perceive the distinction between “reasonable”/excusable and clearly unreasonable demands for catering. Though that often leads to giving up or not getting involved. So there is value in idealism in a neglected direction, it keeps the norm of being aware of that direction alive.
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples and there are no standard words like “weak insult” to delineate the issue, it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I must confess that I am very skeptical. It seems to me that any relevant thing that would need to be avoided, is a thing that is actually good, and avoiding which is bad (e.g., asking for examples of claims, concretizations of abstract concepts, clarifications of term usage, etc.). Of course if there were some action which were avoidable as cheaply (both in the “effort to avoid” and “consequences of avoiding” sense) as vulgarity and personal insults are avoidable, then avoiding it might be good. (Or might not; there is at least one obvious way in which it might actually be bad to avoid such things even if it were both possible and cheap to do so! But we may assume that possibility away, for now.)
But is there such a thing…? I find it difficult to imagine what it might be…
I agree that it’s unclear that steps in this direction are actually any good, or if instead they are mildly bad, if we ignore instances of acute conflict. But I think there is room for optimization that won’t have substantive negative consequences in the dimensions worth caring about, but would be effective in avoiding conflict.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is. To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
So it’s things like adopting lsusr’s suggestion to prefer statements to questions. A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs. If it’s still not obvious, it either wouldn’t work with more explicit explanation, or it’s my argument’s problem, and then it’s no loss, this heuristic leaves the asymmetry intact. I might clarify when asked for clarification. Things like that, generated as appropriate by awareness of this objective.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is.
One does not capitulate to utility monsters, especially not if one’s life isn’t dependent on it.
To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
I wholly agree.
So it’s things like adopting lsusr’s suggestion to prefer statements to questions.
As I said in reply to that comment, it’s an interesting suggestion, and I am not entirely averse to applying it in certain cases. But it can hardly be made into a rule, can it? Like, “avoid vulgarity” and “don’t use direct personal attacks” can be made into rules. There generally isn’t any reason to break them, except perhaps in the most extreme, rare cases. But “prefer statements to questions”—how do you make that a rule? Or anything even resembling a rule? At best it can form one element of a set of general, individually fairly weak, suggestions about how to reduce conflict. But no more than that.
A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs.
I follow just this same heuristic!
Unfortunately, it doesn’t exactly work to eliminate or even meaningfully reduce the incidence of utility-monster attack—as this very post we’re commenting under illustrates.
(Indeed I’ve found it to have the opposite effect. Which is a catch-22, of course. Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.)
I’m gesturing at settling into an unsatisfying strategic equilibrium, as long as there isn’t enough engineering effort towards clarifying the issue (negotiating boundaries that are more reasonable-on-reflection than the accidental status quo). I don’t mean capitulation as a target even if the only place “not provoking” happens to lead is capitulation (in reality, or given your model of the situation). My model doesn’t say that this is the case.
Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.
The problem with this framing (as you communicate it, not necessarily in your own mind) is that it could look the same even if there are affordances for de-escalation at every step, and it’s unclear how efficiently they were put to use (it’s always possible to commit a lot of effort towards measures that won’t help; the effort itself doesn’t rule out availability of something effective). Equivalence between “not provoking” and “capitulation” is a possible conclusion from observing absence of these affordances, or alternatively it’s the reason the affordances remain untapped. It’s hard to tell.
Just as escalation makes a conflict more acute, de-escalation settles it. Even otherwise uninvolved parties could plot either; absence of de-escalation does not in itself imply escalation. Certainly one party could de-escalate a conflict that the other escalates.
The harder and more relevant question is whether some of these heuristics have the desired effect, and which ones are effective when. I think only awareness of the objective of de-escalation could apply these in a sensible way, specific rules (less detailed than a book-length intuition-distilling treatise) won’t work efficiently (that is, without sacrificing valuable outcomes).
I don’t think I disagree with anything you say in particular, not exactly, but I really am not sure that I have any sense of what the category boundaries of this “de-escalation” are supposed to be, or what the predicate for it would look like. (Clearly the naive connotation isn’t right, which is fine—although maybe it suggests a different choice of term? or not, I don’t really know—but I’m not sure where else to look for the answers.)
Maybe this question: what exactly is “the desired effect”? Is it “avoid conflict”? “Avoid unnecessary conflict”? “Avoid false appearance of conflict”? “Avoid misunderstanding”? Something else?
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.
In some situations acute conflict could be useful, a Schelling point for change (time to publish relevant essays, which might be heard more vividly as part of this event). If it’s not useful, I think de-escalation is the way, with absence of acute conflict as the desired effect.
(De-escalation is not even centrally avoidance of individual instances of conflict. I think it’s more important what the popular perception of one’s intentions/objectives/attitudes is, and to prevent formation of grudges. Mostly not bothering those who probably have grudges. This more robustly targets absence of acute conflict, making some isolated incidents irrelevant.)
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.
Is this really anything like a natural category, though?
Like… obviously, “moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized” are things that happen. But once you say “not necessarily intentionally” in your definitions of “escalation” and “de-escalation”, aren’t you left with “whatever actions happen to increase the chance of there being an acute conflict” (and similarly “decrease” for “de-escalation”)? But which actions have these effects clearly depends heavily on all sorts of situational factors, identities and relationships of the participants, the subject matter of the conversation, etc., etc., such that “what specific actions will, as it will turn out, have contributed to increasing/decreasing the chance of conflict in particular situation X” is… well, I don’t want to say “not knowable”, but certainly knowing such a thing is, so to speak, “interpersonal-interaction-complete”.
What can really be said about how to avoid “acute conflict” that isn’t going to have components like “don’t discuss such-and-such topics; don’t get into such-and-such conversations if people with such-and-such social positions in your environment have such-and-such views; etc.”?
Or is that in fact the sort of thing you had in mind?
I guess my question is: do you envision the concrete recommendations for what you call “de-escalation” or “avoiding acute conflict” to concern mainly “how to say it”, and to be separable from “what to say” and “whom to say it to”? It seems to me that such things mostly aren’t separable. Or am I misunderstanding?
(Certainly “not bothering those who probably have grudges” is basically sensible as a general rule, but I’ve found that it doesn’t go very far, simply because grudges don’t develop randomly and in isolation from everything else; so whatever it was that caused the grudge, is likely to prevent “don’t bother person with grudge” from being very applicable or effective.)
Also, it almost goes without saying, but: I think it is extremely unhelpful and misleading to refer to the sort of thing you describe as “enforcement”. This is not a matter of “more [or less] specific interpretation”; it’s just flatly not the same thing.
What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?
This might be a point of contention, but honestly, I don’t really understand, and do not find myself that curious about, a model of social norms that would produce the belief that only moderators can enforce norms in any way, and I am bowing out of this discussion. (The vast majority of social spaces with norms do not even have any kind of official moderator; what does this model predict about the average dinner party or college class?)
My guess is 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space and can maybe engage with Said on explaining this, and I would appreciate someone else jumping in and explaining those models, but I don’t have the time and patience to do this.
Enforcing norms of any kind can be done either by (a) physically preventing people from breaking them—we might call this “hard enforcement”—or (b) inflicting unpleasantness on people who violate said norms, and/or making it clear that this will happen (that unpleasantness will be inflicted on violators), which we might call “soft enforcement”.[1]
Bans are hard enforcement. Downvotes are more like soft enforcement, though karma does matter for things like sorting and whether a comment is expanded by default, so there’s some element of hardness. Posting critical comments is definitely soft enforcement; posting a lot of intensely critical comments is intense soft enforcement. Now, compare with Said’s description elsewhere:
On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.
Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Said is clearly aware of hard enforcement and calls that “enforcement”. Meanwhile, what I call “soft enforcement”, he says isn’t anything at all like “enforcement”. One could put this down to a mere difference in terms, but I think there’s a little more.
It seems accurate to say that Said has an extremely thick skin. Probably to some extent deliberately so. This is admirable, and among other things means that he will cheerfully call out any local emperor for having no clothes; the prospect of any kind of social backlash (“soft enforcement”) seems to not bother him, perhaps not even register to him. Lots of people would do well to be more like him in this respect.
However, it seems that Said may be unaware of the degree to which he’s different from most people in this[2]. (Either in naturally having a thick skin, or in thinking “this is an ideal which everyone should be aspiring to, and therefore e.g. no one would willingly admit to being hurt by critical comments and downvotes”, or something like that.) It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I anticipate a possible objection here: “Well, if I incentivize people to think more rigorously, that seems like a good thing.” At this point the question is “Do Said’s comments enforce any norm at all?”, not “Are Said’s comments pushing people in the right direction?”. (For what it’s worth, my vague memory includes some instances of “Said is asking the right questions” and other instances of “Said is asking dumb questions”. I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.)
Second anticipated objection: “I’m not deliberately trying to enforce anything.” I think it’s possible to do this non-deliberately, even self-destructively. For example, a person could tell their friends “Please tell me if I’m ever messing up in xyz scenarios”, but then, when a friend does so, respond by interrogating the friend about what makes them qualified to judge xyz, have they ever been wrong about xyz, were they under any kind of drugs or emotional distraction or sleep deprivation at the time of observation, do they have any ulterior motives or reasons for self-deception, do their peers generally approve of their judgment, how smart are they really, what were their test scores, have they achieved anything intellectually impressive, etc. (This is avoiding the probably more common failure mode of getting offended at the criticism and expressing anger.) Like, technically, those things are kind of useful for making the report more informative, and some of them might be worth asking in context, but it is easy to imagine the friend finding it unpleasant, either because it took far more time than they expected, or because it became rather invasive and possibly touched on topics they find unpleasant; and the friend concluding “Yeesh. This interaction was not worth it; I won’t bother next time.”
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Anyway, it seems that the claim from various complainants is that Said is, deliberately or not, providing an interface of “If your posts aren’t written in a certain way, then Said is likely to ask a bunch of clarifying questions, with the result that either you may look ~unrigorous or you have to write a bunch of time-consuming replies”, and thus this constitutes soft-enforcing a norm of “writing posts in a certain way”.
Or, regarding the “clarifying questions need replies or else you look ~unrigorous” norm… Actually, technically, I would say that’s not a norm Said enforces; it’s more like a norm he invokes (that is, the norm is preexisting, and Said creates situations in which it applies). As Said says elsewhere, it’s just a fact that, if someone asks a clarifying question and you don’t have an answer, there are various possible explanations for this, one of which is “your idea is wrong”.[4] And I guess the act of asking a question implies (usually) that you believe the other person is likely to answer, so Said’s questions do promulgate this norm even if they don’t enforce it.
Moreover, this being the website that hosts Be Specific, this norm is stronger here than elsewhere. Which… I do like; I don’t want to make excuses for people being unrigorous or weak. But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category. There’s a benefit and a cost to doing so—the benefit being the resulting clarity, the cost being the time and any unpleasantness involved in answering. My brain generates the story “Said, with his extremely thick skin (and perhaps being a weird alien more generally), faces a very difficult task in relating to people who aren’t like him in that respect, and isn’t so unusually good at relating to others very unlike him that he’s able to judge the costs accurately; in practice he underestimates the costs and asks too often.”
And usually anything that does (a) also does (b). Removing someone’s ability to do a thing, especially a thing they were choosing to do in the past, is likely unpleasant on first principles; plus the methods of removing capabilities are usually pretty coarse-grained. In the physical world, imprisonment is the prototypical example here.
It also seems that Duncan is the polar opposite of this (or at least is in that direction), which makes it less surprising that it’d be difficult for them to come to common understanding.
There was a time at work where I was running a script that caused problems for a system. I’d say that this could be called the system’s fault—a piece of the causal chain was the system’s policy I’d never heard of and seemed like the wrong policy, and another piece was the system misidentifying a certain behavior.
In any case, the guy running the system didn’t agree with the goal of my script, and I suspect resented me because of the trouble I’d caused (in that and in some other interactions). I don’t think he had the standing to say I’m forbidden from running it, period; but what he did was tell me to put my script into a pull request, and then do some amount of nitpicking the fuck out of it and requesting additional features; one might call it an isolated demand for rigor, by the standards of other scripts. Anyway, this was a side project for me, and I didn’t care enough about it to push through that, so I dropped it. (Whether this was his intent, I’m not sure, but he certainly didn’t object to the result.)
Incidentally, the more reasonable and respectable the questioner looks, the less plausible explanations like “you think the question is stupid or not worth your time” become, and therefore the greater the pressure to reply on someone who doesn’t want to look wrong. (One wonders if Said should wear a jester’s cap or something, or change his username to “troll”. Or maybe Said can trigger a “Name Examples Bot”, which wears a silly hat, in lieu of asking directly.)
It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I have already commented extensively on this sort of thing. In short, if someone perceives something so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion as receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction, so severe that I cannot see how such a person could possibly expect to participate usefully in any sort of discussion forum, much less one that’s supposed to be about “advancing the art of rationality” or any such thing.
I mean, forget, for the moment, any question of “incentivizing” anyone in any way. I have no idea how it’s even possible to have discussions about anything without anyone ever asking you for clarification or explanation of anything. What does that even look like? I really struggle to imagine how anything can ever get accomplished or communicated while avoiding such things.
And the idea that “requesting more clarification and explanation” constitutes “norm enforcement” in virtue of its unpleasantness (rather than, say, being a way to exemplify praiseworthy behaviors) seems like a thoroughly bizarre view. Indeed, it’s especially bizarre on Less Wrong! Of all the forums on the internet, here, where it was written that “the first virtue is curiosity”, and that “the first and most fundamental question of rationality is ‘what do you think you know, and why do you think you know it?’”…!
I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.
There’s certainly a good deal of intellectual and mental diversity among the Less Wrong membership. (Perhaps not quite enough, I sometimes think, but a respectable amount, compared to most other places.) I count this as a good thing.
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Yes. Having to file a bunch of bureaucratic forms (or else not getting the result you want). Having to answer your friend’s questions (on pain of quarrel or hurtful interpersonal conflict with someone close to you).
But nobody has to reply to comments. You can just downvote and move on with your life. (Heck, you don’t even have to read comments.)
As for the rest, well, happily, you include in your comment the rebuttal to the rest of what I might have wanted to rebut myself. I agree that I am not, in any reasonable sense of the word, “enforcing” anything. (The only part of this latter section of your comment that I take issue with is the stuff about “costs”; but that, I have already commented on, above.)
I’ll single out just one last bit:
But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category.
I think you’ll find that I don’t say “name three examples” every single time someone mentions a category, either (nor—to pre-empt the obvious objection—is there any obvious non-hyperbolic version of this implied claim which is true). In fact I’m not sure I’ve ever said it. As gwern writes:
‘Examples?’ is one of the rationalist skills most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.
In short, if someone perceives [...] receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction
I must confess that I don’t sympathize much with those who object majorly. I feel comfortable with letting conversations on the public internet fade without explanation. “I would love to reply to everyone [or, in some cases, “I used to reply to everyone”] but that would take up more than all of my time” is something I’ve seen from plenty of people. If I were on the receiving end of the worst version of the questioning behavior from you, I suspect I’d roll my eyes, sigh, say to myself “Said is being obtuse”, and move on.
That said, I know that I am also a weird alien. So here is my attempt to describe the others:
“I do reply to every single comment” is a thing some people do, often in their early engagement on a platform, when their status is uncertain. (I did something close to that on a different forum recently, albeit more calculatedly as an “I want to reward people for engaging with my post so they’ll do more of it”.) There isn’t really a unified Internet Etiquette that everyone knows; the unspoken rules in general, and plausibly on this specifically, vary widely from place to place.
I also do feel some pressure to reply if the commenter is a friend I see in person—that it’s a little awkward if I don’t. This presumably doesn’t apply here.
I think some people have a self-image that they’re “polite”, which they don’t reevaluate especially often, and believe that it means doing certain things such as giving decent replies to everyone; and when someone creates a situation in which being “polite” means doing a lot of work, that may lead to significant unpleasantness (and possibly lead them to resent whoever put them in that situation; a popular example like this is Bilbo feeling he “has to” feed and entertain all the dwarves who come visiting, being very polite and gracious while internally finding the whole thing very worrying and annoying).
If the conversation begins well enough, that may create more of a politeness obligation in some people’s heads. The fact that someone had to create the term “tapping out” is evidence that some people’s priors were that simply dropping the conversation was impolite.
Looking at what’s been said, “frustration” is mentioned. It seems likely that, ex ante, people expect that answering your questions will lead to some reward (you’ll say “Aha, I understand, thank you”; they’ll be pleased with this result), and if instead it leads to several levels of “I don’t understand, please explain further” before they finally give up, then they may be disappointed ex post. Particularly if they’ve never had an interaction like this before, they might have not known what else to do and just kept putting in effort much longer than a more sophisticated version of them would have recommended. Then they come away from the experience thinking, “I posted, and I ended up in a long interaction with Said, and wow, that sucked. Not eager to do that again.”
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
In some places on the internet, trolling is or has been a major problem. Making someone do a bunch of work by repeatedly asking “Why?” and “How do you know that?”, and generally applying an absurdly high standard of rigor, is probably a tactic that some trolls have engaged in to mess with people. (Some of my friends who like to tease have occasionally done that.) If someone seems to be asking a bunch of obtuse questions, I may at least wonder whether it’s deliberate. And interacting with someone you suspect might be trolling you—perhaps someone you ultimately decide is pretty trollish after a long, frustrating interaction—seems potentially uncomfortable.
(I personally tend to welcome the challenge of explaining myself, because I’m proud of my own reasoning skills (and probably being good at it makes the exercise more enjoyable) and aspire to always be able to do that; but others might not. Perhaps some people have memories of being tripped up and embarrassed. Such people should get over it, but given that not all of them have done so… we shouldn’t bend over backwards for them, to be sure, but a bit of effort to accommodate them seems justifiable.)
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
I find this hard to relate to—I’m extremely secure in my own intelligence, and react to the idea of someone being possibly smarter than me with “Ooh, I hope so, I wish that were so! (But I doubt it!)”; if someone comes away thinking I’m stupid, I tend to find that amusing, at worst disappointing (disappointed in them, that is). I suspect your background resembles mine in this respect.
But I hear that teachers and even parents, frequently enough for this to be a problem, feel threatened when a kid says they’re wrong (and backs it up). (To some extent this may be due to authority-keeping issues.) I hear that often kids in school are really afraid of being called, or shown to be, stupid. John Holt (writing from his experience as a teacher—the kids are probably age 10 or so) says:
The other day I decided to talk to the other section about what happens when you don’t understand what is going on. We had been chatting about something or other, and everyone seemed in a relaxed frame of mind, so I said, “You know, there’s something I’m curious about, and I wonder if you’d tell me.” They said, “What?” I said, “What do you think, what goes through your mind, when the teacher asks you a question and you don’t know the answer?”
It was a bombshell. Instantly a paralyzed silence fell on the room. Everyone stared at me with what I have learned to recognize as a tense expression. For a long time there wasn’t a sound. Finally Ben, who is bolder than most, broke the tension, and also answered my question, by saying in a loud voice, “Gulp!”
He spoke for everyone. They all began to clamor, and all said the same thing, that when the teacher asked them a question and they didn’t know the answer they were scared half to death.
I was flabbergasted—to find this in a school which people think of as progressive; which does its best not to put pressure on little children; which does not give marks in the lower grades; which tries to keep children from feeling that they’re in some kind of race. I asked them why they felt gulpish. They said they were afraid of failing, afraid of being kept back, afraid of being called stupid, afraid of feeling themselves stupid.
Stupid. Why is it such a deadly insult to these children, almost the worst thing they can think of to call each other? Where do they learn this? Even in the kindest and gentlest of schools, children are afraid, many of them a great deal of the time, some of them almost all the time. This is a hard fact of life to deal with. What can we do about it?
(By the way, someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high (relative to their peers in their formative years), so this would be a self-censoring fear. I don’t think I’ve seen anyone mention intellectual insecurity in connection to this whole discussion, but I’d say it likely plays at least a minor role, and plausibly plays a major role.)
Again, if school traumatizes people into having irrational fears about this, that’s not a good thing, it’s the schools’ fault, and meanwhile the people should get over it; but again, if a bunch of people nevertheless haven’t gotten over it, it is useful to know this, and it’s justifiable to put some effort into accommodating them. How much effort is up for debate.
Eliezer himself doesn’t say “Name three examples” every single time
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
Gwern would be asking for 3 examples
Gwern is strong. You (and Zack) are also strong. Some people are weaker. One could design a forum that made zero accommodations for the weak. The idea is appealing; I expect I’d enjoy reading it and suspect I could hold my own, commenting there, and maybe write a couple of posts. I think some say that Less Wrong 1.0 was this, and too few people wanted to post there and the site died. One could argue that, even if that’s true, today there are enough people (plus enough constant influx due to interest in AI) to have a critical mass and such a site would be viable. Maybe. One could counterargue that the process of flushing out the weak is noisy and distracting, and might drive away the good people.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan, he says that most people are not “Keepers” (trained ultra-rationalists, always looking unflinchingly at harsh truths, expected to remain calm and clear-headed no matter what they’re dealing with, etc.), that most people are not fit to be Keepers, and that it’s fine and good that they don’t hold themselves to that standard. Now, like, I guess one could imagine there should be at least enough Keepers to have their own forum, and perhaps Less Wrong could be such a forum. Well, one might say that having an active forum that trains people who are not yet Keepers is a strictly easier target than, and likely a prerequisite for, an active and long-lived Keeper forum. If LW is to be the Keeper forum, where are the Keepers trained? The SSC subreddit? Just trial by fire and take the fraction of a fraction of the population who come to the forum untrained and do well without any nurturing?
I don’t know. It could be the right idea. I would give it… 25%?, that this is better than some more civilian-accommodating thing like what we have today. I am really not an expert on forecasting this, and am pretty comfortable leaving it up to the current LW team. (I also note that, if we manage to do something like enhance the overall population’s intelligence by a couple of standard deviations—which I hope will be achievable in my lifetime—then the Keeper pipeline becomes much better.) And no, I don’t think it should do much in the way of accommodating civilians at the expense of the strong—but the optimal amount of doing that is more than zero.
Much of what you write here seems to me to be accurate descriptively, and I don’t have much quarrel with it. The two most salient points in response, I think, are:
To the original question that spawned this subthread (concerning “[implicit] enforcement of norms” by non-moderators, and how such a thing could possibly work), basically everything in your comment here is non-responsive. (Which is fine, of course—it doesn’t imply anything bad about your comment—but I just wanted to call attention to this.)
However accurate your characterizations may be descriptively, the (or, at least, an) important question is whether your prescriptions are good normatively. On that point I think we do have disagreement. (Details follow.)
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
“Basic competence” is usually a category error, I think. (Not always, but usually.) One can have basic competence at one’s profession, or at some task or specialty; and these things could be called into question. And there is certainly a norm, in most social contexts, that a non-specialist questioning the basic competence of a specialist is a faux pas. (I do not generally object to that norm in wider society, though I think there is good reason for such a norm to be weakened, at least, in a place like Less Wrong; but probably not absent entirely, indeed.)
What this means, then, is that if I write something about, let’s say, web development, and someone asks me for clarification of some point, then the implicatures of the question depend on whether the asker is himself a web dev. If so, then I address him as a fellow specialist, and interpret his question accordingly. If not, then I address him as a non-specialist, and likewise interpret his question accordingly. In the former case, the asker has standing to potentially question my basic competence, so if I cannot make myself clear to him, that is plausibly my fault. In the latter case, he has no such standing, but likewise a request for clarification from him can’t really be interpreted as questioning my basic competence in the first place; and any question that, from a specialist, would have that implication, from a non-specialist is merely revelatory of the asker’s own ignorance.
Nevertheless I think that you’re onto something possibly important here. Namely, I have long noticed that there is an idea, a meme, in the “rationalist community”, that indeed there is such a thing as a generalized “basic competence”, which manifests itself as the ability to understand issues of importance in, and effectively perform tasks in, a wide variety of domains, without the benefit of what we would usually see as the necessary experience, training, declarative knowledge, etc., that is required to gain expertise in the domain.
It’s been my observation that people who believe in this sort of “generalized basic competence”, and who view themselves as having it, (a) usually don’t have any such thing, and (b) get quite offended when it’s called into question, even by the most indirect implication, or even conditionally. This fits the pattern you describe, in a way, but of course that is missing a key piece of the puzzle: what is unpleasant is not being asked for clarification, but being revealed to be a fraud (which would be the consequence of demonstrably failing to provide any satisfying clarification).
In some places on the internet, trolling is or has been a major problem.
But it’s quite possible, and not even very hard, to prove oneself a non-troll. (Which I think that I, for instance, have done many times over. There aren’t many trolls who invest as much work into a community as I have. I note this not to say something like “what I’ve contributed outweighs the harm”, as some of the moderators have suggested might be a relevant consideration—and which reasoning, quite frankly, makes me uncomfortable—but rather to say “all else aside, the troll hypothesis can safely be discarded”.)
In other words, yes, trolling exists, but for the purposes of this discussion we can set that fact aside. The LW moderation team have shown themselves to be more than sufficiently adept at dealing with such “cheap” attacks that we can, to a first (or even second or third) approximation, simply discount the possibility of trolling, when talking about actual discussions that happen here.
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
As it happens, I quite empathize with this worry—indeed I think that I can offer a steelman of your description here, which (I hope you’ll forgive me for saying) does seem to me to be just a bit of a strawman (or at least a weakman).
There are indeed some really impressive people on Less Wrong. (Their proportion in the overall membership is of course lower than it was in the “glory days”, but nevertheless they are a non-trivial contingent.) And the worry is not, perhaps, that one will be made to look stupid in front of them, but rather that one will waste their time. “Who am I,” the potential contributor might think, “to offer my paltry thoughts on any of these lofty matters, to be listed alongside the writings of these greats, such that the important and no doubt very busy people who read this website will have to sift through the dross of my embarrassingly half-formed theses and idle ramblings, in the course of their readings here?” And then, when such a person gets up the confidence and courage to post, if the comments they get prove at once (to their minds) that all their worries were right, that what they’ve written is worthless, little more than spam—well, surely they’ll be discouraged, their fears reinforced, their shaky confidence shattered; and they won’t post again. “I have nothing to contribute,” they will think, “that is worthy of this place; I know this for a fact; see how my attempts were received!”
I’ve seen many people express worries like this. And there are, I think, a few things to say about the matter.
First: however relevant this worry may have been once, it’s hardly relevant now.
This is for two reasons, of which the first is that the new Less Wrong is designed precisely to alleviate such worries, with the “personal” / “frontpage” distinction. Well, at least, that would be true, if not for the LW moderators’ quite frustrating policy of pushing posts to the frontpage section almost indiscriminately, all but erasing the distinction, and preventing it from having the salutary effect of alleviating such worries as I have described. (At least there’s Shortform, though?)
The second reason why this sort of worry is less relevant is simply that there’s so much more garbage on Less Wrong today. How plausible is it, really, to look at the current list of frontpage posts, and think “gosh, who am I to compete for readers’ time with these great writings, by these great minds?” Far more likely is the opposite thought: “what’s the point of hurling my thoughts into this churning whirlpool of mediocrity?” Alright, so it’s not quite Reddit, but it’s bad enough that the moderators have had to institute a whole new set of moderation policies to deal with the deluge! (And well done, I say, and long overdue—in this, I wholly support their efforts.)
Second: I recall someone (possibly Oliver Habryka? I am not sure) suggesting that the people who are most worried about not measuring up tend also to be those whose contributions would be some of the most useful. This is a model which is more or less the opposite of your suggestion that “someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high”; it claims, instead, something like “someone being afraid that they won’t measure up is probably Bayesian evidence that their intellectual standards as applied to themselves are high, and that their ideas are valuable”.
I am not sure to what extent I believe either of these two models. But let us take the latter model for granted, for a moment. Under this view, any sort of harsh criticism, or even just anything but the most gentle handling and the most assiduous bending-over-backwards to avoid any suggestion of criticism, risks driving away the most potentially valuable contributors.
Of course, one problem is that any lowering of standards mostly opens the floodgates to a tide of trash, which itself then acts to discourage useful contributions. But let’s imagine that you can solve that problem—that you can set up a most discerning filter, which keeps out all the mediocre nonsense, all the useless crap, but somehow does this without spooking the easily-spooked but high-value authors.
But even taking all of that for granted—you still haven’t solved the fundamental problems.
Problem (a): even the cleverest of thinkers and writers sometimes have good ideas but sometimes have bad ideas; or ideas that have flaws; or ideas that are missing key parts; or, heck, they simply make mistakes, accidentally cite the wrong thing and come to the wrong conclusion, misremember, miscount… you can’t engage only on the assumption that the author’s ideas are without flaw, and that your part is only to respectfully learn at the author’s feet. That doesn’t work.
Problem (b): even supposing that an idea is perfect—what do you do with it? In order to make use of an idea, you must understand it, you must explore it; that means asking questions, asking for clarifications, asking for examples. That is (and this is a point which, incredibly, seems often to be totally lost in discussions like this) how people engage with ideas that excite them! (Otherwise—what? You say “wow, amazing” and that’s it? Or else—as I have personally seen, many times—you basically ignore what’s been written, and respond with some only vaguely related commentary of your own, which doesn’t engage with the post at all, isn’t any attempt to build anything out of it, but is just a sort of standalone bit of cleverness…)
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
No, this is just confused.
Of course I don’t have a bot that replies to every comment with “Be more specific”, but that’s not because there’s some sort of tradeoff; it’s simply that it’s not always appropriate or relevant or necessary. Why ask for clarification, if all is already clear? Why ask for examples, if they’ve already been provided, or none seem needed? Why ask for more specificity, if one’s interlocutor has already expressed themselves as specifically as the circumstances call for? If someone writes a post about “authenticity”, I may ask what they mean by the word; but what mystery, what significance, is there in the fact that I don’t do the same when someone writes a post about “apples”? I know what apples are. When people speak of “apples” it’s generally clear enough what they’re talking about. If not—then I would ask.
Gwern would be asking for 3 examples
Gwern is strong. You (and Zack) are also strong. Some people are weaker.
There is no shame in being weak. (It is an oft-held view, in matters of physical strength, that the strong should protect the weak; I endorse that view, and hold that it applies in matters of emotional and intellectual strength as well.) There may be shame in remaining weak when one can become strong, or in deliberately choosing weakness; but that may be disputed.
But there is definitely shame in using weakness as a weapon against the strong. That is contemptible.
Strength may not be required. But weakness must not be valorized. And while accommodating the weak is often good, it must never come at the expense of discouraging strength, for then the effort undermines itself, and ultimately engineers its own destruction.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan
I deliberately do not, and would not, cite Eliezer’s recent writings, and especially not those about dath ilan. I think that the ideas you refer to, in particular (about the Keepers, and so on), are dreadfully mistaken, to the point of being intellectually and morally corrosive.
I would appreciate someone else jumping in and explaining those models
I’m super not interested in putting effort into talking about this with Said. But a low-effort thing to say is: my review of Order Without Law seems relevant. (And the book itself moreso, but that’s less linkable.)
Could you very briefly say more about what the relevance is, then? Is there some particular aspect of the linked review of which you think I should take note? (Or is it just that you think the whole review is likely to contain some relevant ideas, but you don’t necessarily have any specific parts or aspects in mind?)
Sorry. I spent a few minutes trying to write something and then decided it was going to be more effort than I wanted, so...
I do have something in mind, but I apparently can’t write it down off the cuff. I can gesture vaguely at the title of the book, but I suppose that’s unlikely to be helpful. I don’t have any specific sections in mind.
(I think I’m unlikely to reply again unless it seems exceptionally likely that doing so will be productive.)
so what does this model predict about just like the average dinner party or college class
Dinner parties have hosts, who can do things like: ask a guest to engage or not engage in some behavior; ask a guest to leave if they’re disruptive or unwanted; not invite someone in the first place; in the extreme, call the police (having the legal standing to do so, as the owner of the dwelling where the party takes place).
College classes have instructors, who can do things like: ask a student to engage or not engage in some behavior; ask a student to leave if they’re disruptive; cause a student to be dropped from enrollment in the course; call campus security to eject the student (having the organizational and legal standing to do so, as an employee of the college, who is granted the mandate of running the lecture/course/etc.).
(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)
My guess is 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space …
But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.
On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.
Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Upvotes/downvotes are very slightly like “enforcement”. (But of course we’re not talking about upvotes/downvotes here.)
Banning a user from your posts is a bit more like “enforcement”. (But we’re definitely not talking about that here.)
Given the existence of moderators on Less Wrong, I do not, indeed, see any way to describe anything that I have ever done as “enforcement” of anything. It seems to me that such a claim is incoherent.
But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.
That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.
One last reply:
(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)
Indeed, college classes (and classes in-general) seem like an important study since in my experience it is very clear that only a fraction of the norms in those classes get set by the professor/teacher, and that clearly there are many other sources of norms and the associated enforcement of norms.
Experiencing those bottom-up norms is a shared experience since almost everyone went through high school and college, so seems like a good reference.
Indeed, college classes (and classes in-general) seem like an important study since in my experience it is very clear that only a fraction of the norms in those classes get set by the professor/teacher, and that clearly there are many other sources of norms and the associated enforcement of norms.
Of course this is true; it is not just the instructor, but also the college administration, etc., that function as the setter and enforcer of norms.
But it sure isn’t the students!
(And this is even more true in high school. The students have no power to set any norms, except that which is given them by the instructor/administration/etc.—and even that rarely happens.)
The plot point of many high school movies is often about what is and isn’t acceptable to do, socially. For example, Regina in Mean Girls enforced a number of rules on her clique, and attempted with significant but not complete success to enforce them on others.
That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.
I do think it would be useful for you to say how much time should elapse without a satisfactory reply by some representative members of this 95% before we can reasonably evaluate whether this prediction has been proven true.
Oh, the central latent variable in my uncertainty here is “is anyone willing to do this?” not “is anyone capable of this?”. My honest guess is the answer to that is “no” because this kind of conversation really doesn’t seem fun, and we are 7 levels deep into a 400 comment post.
My guess is if you actively reach out and put effort into trying to get someone to explain it to you, by e.g. putting out a bounty, or making a top-level post, or somehow send a costly signal that you are genuinely interested in understanding, then I do think there is a much higher chance of that, but I don’t currently expect that to happen.
You do understand, I hope, how this stance boils down to “we want you to stop doing a thing, but we won’t explain what that thing is; figure it out yourself”?
No, it boils down to “we will enforce consistent rules and spend like 100+ hours trying to explain them if an established user is confused, and if that’s not enough, then I guess that’s life and we’ll move on”.
Describing the collective effort of the Lightcone team as “unwilling to explain what the thing is” seems really quite inaccurate, given the really quite extraordinary amount of time we have spent over the years trying to get our models across. You can of course complain about the ineffectuality of our efforts to explain, but I do not think you can deny the effort, and I do not currently know what to do that doesn’t involve many additional hours of effort.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
Describing the collective effort of the Lightcone team as “unwilling to explain what the thing is” seems really quite inaccurate
I do not think you can deny the effort
I’ve been told (and only after much effort on my part in trying to get an answer) that the problem being solved here is something called “(implicit) enforcement of norms” on my part. I’ve yet to see any comprehensible (or even, really, seriously attempted) explanation of what that’s supposed to mean, exactly, and how any such thing can be done by a (non-moderator) user of Less Wrong. You’ve said outright that you refuse to attempt an explanation. “Unwilling to explain what the thing is” seems entirely accurate.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
The one we’ve spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and been talking about for 5 plus years about what the cost of your comments to the site has been.
It does not surprise me that you cannot summarize them or restate them in a way that shows understanding them, which is why more effort on explaining them does not seem worth it. The concepts here are also genuinely kind of tricky, and we seem to be coming from very different perspectives and philosophies, and while I do experience frustration, I can also see why this looks very frustrating for you.
I agree that I personally haven’t put a ton of effort (though like 2-3 hours for my comments with Zack which seem related) at this specific point in time, though I have spent many dozens of hours in past years, trying to point to what seems to me the same disagreements.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
The one we’ve spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and been talking about for 5 plus years about what the cost of your comments to the site has been.
But which are not, like… stated anywhere? Like, in some sort of “what are the rules of this website” page, which explains these rules?
Don’t you think that’s an odd state of affairs, to put it mildly?
The concept of “ignorance of the law is no excuse” was mentioned earlier in this discussion, and it’s a reasonable one in the real world, where you generally can be aware of what the law is, if you’re at all interested in behaving lawfully[1]. If you get a speeding ticket, and say “I didn’t know I was exceeding the speed limit, officer”, the response you’ll get is “signs are posted; if you didn’t read them, that’s no excuse”. But that’s because the signs are, in fact, posted. If there were no signs, then it would just be a case of the police pulling over whoever they wanted, and giving them speeding tickets arbitrarily, regardless of their actual speed.
You seem to be suggesting that Less Wrong has rules (not “norms”, but rules!), which are defined only in places like “long, branching, deeply nested comment threads about specific moderation decisions” and “scattered over years of discussion with some specific user(s)”, and which are conceptually “genuinely kind of tricky”; but that violating these rules is punishable, like any rules violation might be.
Does this seem to you like a remotely reasonable way to have rules?
These are in a pinned moderator-top-level comment on a moderation post that was pinned for almost a full week, so I don’t think this counts as being defined in “long, branching, deeply nested comment threads about specific moderation decisions”. I think we tried pretty hard here to extract the relevant decision-boundaries and make users aware of how we plan to make decisions going forward.
I don’t know of a better way to have rules than this. As I said in a thread to Zack, case-law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does sure produce a body of law that in order to navigate it successfully you have to study the lines revealed through past litigation.
In particular, if either Said makes a case that he can obey the spirit of “don’t imply people have an obligation to engage with his comments”; or, someone suggests a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way, I’d feel fairly good about revoking the rate-limit.
The only thing that looks like a rule here is “don’t imply people have an obligation to engage with [your] comments”. Is that the rule you’ve been talking about? (I asked this of Raemon and his answer was basically “yes but not only”, or something like that.)
And the rest pretty clearly suggests that there isn’t a clearly defined rule here.
The mod note from 5 years ago seems to me to be very clearly not defining any rules.
Here’s a question: if you asked ten randomly selected Less Wrong members: “What are the rules of Less Wrong?”—how many of them would give the correct answer? Not as a link to this or that comment, but in their own words (or even just by quoting a list of rules, minus the commentary)?
(What is the correct answer?)
How many of their answers would even match one another?
As I said in a thread to Zack, case-law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does sure produce a body of law that in order to navigate it successfully you have to study the lines revealed through past litigation.
Yes, of course, but the way this works in real-world legal systems is that first there’s a law, and then there’s case law which establishes precedent for its application. (And, as you say, it hardly makes it easy to comply with the law. Perhaps I should retain an attorney to help me figure out what the rules of Less Wrong are? Do I need to have a compliance department…?) Real-world legal systems in well-functioning modern countries generally don’t take the approach of “we don’t have any written down laws; we’ll legislate by judgment calls in each case; even after doing that, we won’t encode those judgments into law; there will only be precedent and judicial opinion, and that will be the whole of the law”.[1]
If it’s worth banning[1] someone (and even urgently investing development resources into a feature that enables that banning-or-whatever!) because their comments might, possibly, on some occasions, potentially mislead users into falsely believing X… then it surely must be worthwhile to simply outright tell users ¬X?
(I mean, of all the things that it might be nice to tell new users, this, which—if this topic, and all the moderators’ comments on it, are to be believed—is so consequential, has to be right up at the top of the list?)
Or rate-limiting, or applying any other such moderation action to.
This is not what I said though.
Now that you’ve clarified your objection here, I want to note that this does not respond to the central point of the grandparent comment:
If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?
Communicating this to users seems like an obvious win, and one which would make a huge chunk of this entire discussion utterly moot.
Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style, seems unlikely to be the right choice. You are a UI designer and you are well aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution.
But even assuming we did add such a message, there are many other problems:
Posting such a message would communicate a level of importance of this specific norm, which does not actually come up very frequently in conversations that don’t involve you and a small number of other users, that is not commensurate with its actual importance. We have the standard frontpage commenting guidelines, and they cover what I consider the actually most important things to communicate, and they are approximately the maximum length I expect new users to read. Adding this warning would have to displace one of the existing guidelines, which seems very unlikely to be worth it.
Banner blindness is real, and if you put the same block of text anywhere, people will quickly learn to ignore them. This has already happened with the existing moderation guidelines and frontpage guidelines.
If you have a sign in a space that says “don’t scream at people” but then lots of people do actually scream at you in that room, this doesn’t actually really help very much, and more likely just reduces trust in your ability to set any kind of norm in your space. I’ve really done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience. The correct response by users to your comments, in the presence of the box with the guideline, would be “There is a very prominent rule that says I am not obligated to respond, so why aren’t you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?”, which would just bring us back to square one.
My guess is you will respond to this with some statement of the form “but I have said many times that I do not think the norms are such that you have an obligation to respond”, but man, subtext and text do just differ frequently in communication, and the subtext of your comments does really just tend to communicate the opposite. A way out of this situation might be that you just include a disclaimer in the first comment on every post, but I can also imagine that not working for a bunch of messy reasons.
I can also imagine you responding to this with “but I can’t possibly create an obligation to respond, the only people who can do that are the moderators”, which seems to be a stance implied by some other comments you wrote recently. This stance seems to me to fail to model how actual social obligations develop and how people build knowledge about social norms in a space. The moderators only set a small fraction of the norms and culture of the site, and of course individual users can create an obligation for someone to respond.
I am not super interested in going into depth here, but felt somewhat obligated to reply since your suggestion had some number of upvotes.
First, concerning the first half of your comment (re: importance of this information, best way of communicating it):
I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”
Have you checked that users understand that they don’t have an obligation to respond to comments?
If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)
Second, concerning the second half of your comment:
Frankly, this whole perspective you describe just seems bizarre.
Of course I can’t possibly create a formal obligation to respond to comments. Of course only the moderators can do that. I can’t even create social norms that responses are expected, if the moderators don’t support me in this (and especially if they actively oppose me). I’ve never said that such a formal obligation or social norm exists; and if I ever did say that, all it would take is a moderator posting a comment saying “no, actually” to unambiguously controvert the claim.
But on the other hand, I can’t create an epistemic obligation to respond, either—because it already either exists or already doesn’t exist, regardless of what I think or do.
So, you say:
If someone writes a post and someone else (regardless of who it is!) writes a comment that says “what are some examples?”, then whether the post author “faces humiliation” (hardly the wording I’d choose, but let’s go with it) in front of the Less Wrong audience if they don’t respond is… not something that I can meaningfully affect. That judgment is in the minds of the aforesaid audience. I can’t make people judge thus, nor can I stop them from doing so. To ascribe this effect to me, or to any specific commenter, seems like willful denial of reality.
This would be a highly unreasonable response. And the correct counter-response by moderators, to such a question, would be:
“Because users can’t ‘create a strong obligation for you to respond’. We’ve made it clear that you have no such obligation. (And the commenters certainly aren’t claiming otherwise, as you can see.) It would be utterly absurd for us to moderate or delete these comments, just because you don’t want to respond to them. If you feel that you must respond, respond; if you don’t want to, don’t. You’re an adult and this is your decision to make.”
(You might also add that the downvote button exists for a reason. You might point out, additionally, that low-karma comments are hidden by default. And if the comments in question are actually highly upvoted, well, that suggests something, doesn’t it?)
(I am not planning to engage further at this point.
My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don’t think I am saying particularly complicated things, and I think I’ve communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them.
My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we’ll continue to take some moderator actions until things look better by our models. I think we’ve both gone far beyond our duty of effort to explain where we are coming from and what our models are.)
This seems like an odd response.
In the first part of the grandparent comment, I asked a couple of questions. I can’t possibly “figure out what you mean” in those cases, since they were questions about what you’ve done or haven’t done, and about what you think of something I asked.
In the second part of the grandparent comment, I gave arguments for why some things you said seem wrong or incoherent. There, too, “figuring out what you mean” seems like an inapplicable concept.
You and the other moderators have certainly written many words. But only the last few comments on this topic have contained even an attempted explanation of what problem you’re trying to solve (this “enforcement of norms” thing), and there, you’ve not only not “gone far beyond your duty” to explain—you’ve explicitly disclaimed any attempt at explanation. You’ve outright said that you won’t explain and won’t try!
It’s important for users to know when it comes up. It doesn’t come up much except with you.
(I wrote the following before habryka wrote his message)
While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I’ve been expressing concerns about in this particular discussion.
The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).
But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)
This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)
Oh? My mistake, then. Should it be “because their comments have, on some occasions, misled users into falsely believing X”?
(It’s not clear to me, I will say, whether you are claiming this is actually something that ever happened. Are you? I will note that, as you’ll find if you peruse my comment history, I have on more than one occasion taken pains to explicitly clarify that Less Wrong does not, in fact, have a norm that says that responding to comments is mandatory, which is the opposite of misleading people into believing that such a norm exists…)
No. This is still oversimplifying the issue, which I specifically disclaimed. Ben Pace gives a sense of it here:
The problem is implicit enforcement of norms. Your stated beliefs do help alleviate this but only somewhat. And, like Ben also said in that comment, from a moderator perspective it’s often correct to take mod action regardless of whether someone meant to do something we think has had an outsized harm on the site.
I’ve now spent (honestly more than) the amount of time I endorse on this discussion. I am still mulling over the overall discussion a lot, but in the interest of declaring this done for now, I’m declaring that we’ll leave the rate limit in place for ~3 months, and re-evaluate then. I feel pretty confident doing this because it seems commensurate with the original moderation warning (i.e. a 3 month rate limit seems similar to me in magnitude to a 1-month ban, and I think Said’s comments in the Duncan/Said conflict count as a triggering instance).
I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won’t have the impacts the mod team is worried about. I don’t know that we explained this maximally well, but I think we explained it well enough that it should be fairly obvious to you why your comment here is missing the point, and if it’s not, I don’t really know what to do about that.
Alright, fair enough, so then…
… but then my next question is:
What the heck is “implicit enforcement of norms”??
To be quite honest, I think you have barely explained it at all. I’ve been trying to get an explanation out of you, and I have to say: it’s like pulling teeth. It seems like we’re getting somewhere, finally? Maybe?
You’re asking me to change my commenting behavior. I can’t even consider doing that unless I know what you think the problem is.
So, questions:
What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?
This “implicit enforcement of norms” (whatever it is)—is it a problem additionally to making false claims about what norms exist?
If the answer to #2 is “yes”, then what is your response to my earlier comments pointing out that no such false claims took place?
A norm is a pattern of behavior, something people can recognize and enact. Feeding a norm involves making a pattern of behavior more available (easy to learn and perceive), and more desirable (motivating its enactment, punishing its non-enactment). A norm can involve self-enforcement (here “self” refers to the norm, not to a person), adjoining punishment of non-enforcers and reward of enforcers as part of the norm. A well-fed norm is ubiquitous status quo, so available you can’t unsee it. It can still be opted-out of, by not being enacted or enforced, at the cost of punishment from those who enforce it. It can be opposed by conspicuously doing the opposite of what the norm prescribes, breaking the pattern, thus feeding a new norm of conspicuously opposing the original norm.
Almost all anti-epistemology is epistemic damage perpetrated by self-enforcing norms. Tolerance is boundaries against enforcement of norms. Intolerance of tolerance breaks it down, tolerating tolerance allows it to survive, restricting virality of self-enforcing norms. The self-enforcing norm of tolerance that punishes intolerance potentially exterminates valuable norms, not obviously a good idea.
So there is a norm of responding to criticism, and its power is the weight of the obligation to do so. It always exists in principle, at some level of power, not as categorically absent or present. I think there are clearly many ways of feeding that norm, or of not depriving it of influence, that are rather implicit.
(Edit: Some ninja-editing, Said quoted the pre-edit version of third paragraph. Also fixed the error in second paragraph where I originally equivocated between tolerating tolerance and self-enforcing tolerance.)
Perhaps, for some values of “feeding that norm” and “[not] not depriving it of influence”. But is this “enforcement”? I do not think so. As far as I can tell, when there is a governing power (and there is surely one here), enforcement of the power’s rules can be done by that power only. (Power can be delegated—such as by the LW admins granting authors the ability to ban users from their posts—but otherwise, it is unitary. And such delegated power isn’t at all what’s being discussed here, as far as I can tell.)
That’s fair, but I predict that the central moderators’ complaint is in the vicinity of what I described, and has nothing to do with more specific interpretations of enforcement.
If so, then that complaint seems wildly unreasonable. The power of moderators to enforce a norm (or a norm’s opposite) is vastly greater than the power of any ordinary user to subtly influence the culture toward acceptance or rejection of a norm. A single comment from a moderator so comprehensively outweighs the influence, on norm-formation, of even hundreds of comments from any ordinary user, that it seems difficult to believe that moderators would ever need to do anything but post the very occasional short comment that links to a statement of the rules/norms and reaffirms that those rules/norms are still in effect.
(At least, for norms of the sort that we’re discussing. It would be different for, e.g., “users should do X”. You can punish people for breaking rules of the form “users should never do X”; that’s easy enough. Rules/norms of the form “users don’t need to do X”—i.e., those like the one we’ve been discussing—are even easier; you don’t need to punish anything, just occasionally reaffirm or remind people that X is not mandatory. But “users should do X” is tricky, if X isn’t something that you can feasibly mandate; that takes encouragement, incentives, etc. But, of course, that isn’t at all the sort of thing we’re talking about…)
Everyone can feed a norm, and direct action by moderators can be helpless before strong norms, as scorched-earth capabilities can still be insufficient for reaching more subtle targets. Thus discouraging the feeding of particular norms rather than going against the norms themselves.
If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it’s not mandatory doesn’t obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.
What you are describing would have to be a very well-entrenched and widespread norm, supported by many users, and opposed by few users. Such a thing is perhaps possible (I have my doubts about this; it seems to me that such a hypothetical scenario would also require, for one thing, a lack of buy-in from the moderators); but even if it is—note how far we have traveled from anything resembling the situation at hand!
Motivation gets internalized, following a norm can be consciously endorsed, disobeying a norm can be emotionally valent. So it’s not just about external influence in affecting the norm, there is also the issue of what to do when the norm is already in someone’s head. To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster. But I think it’s a real thing that happens all the time.
This particular norm is obviously well-known in the wider world, some people have it well-entrenched in themselves. The problem discussed above was reinforcing or spreading the norm, but there is also a problem of triggering the norm. It might be a borderline case of feeding it (in the form of its claim to apply on LW as well), but most of the effect is in influencing people who already buy the norm towards enacting it, by setting up central conditions for its enactment. Which can be unrewarding for them, but necessary on the pain of disobeying the norm entrenched in their mind.
For example, what lsusr is talking about here is trying not to trigger the norm. Statements are less imposing than questions in that they are less valent as triggers for response-obligation norms. This respects boundaries of people’s emotional equilibrium, maintains comfort. When the norms/emotions make unhealthy demands on one’s behavior, this leads to more serious issues. It’s worth correcting, but not without awareness of what might be going on. I guess this comes back to motivating some interpretative labor, but I think there are relevant heuristics at all levels of subtlety.
Just so.
In general, what you are talking about seems to me to be very much a case of catering to utility monsters, and denying that people have the responsibility to manage their own feelings. It should, no doubt, be permissible to behave in such ways (i.e., to carefully try to avoid triggering various unhealthy, corrosive, and self-sabotaging habits / beliefs, etc.), but it surely ought not be mandatory. That incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive.
EDIT: Also, and importantly, I think that describing this sort of thing as a “norm” is fundamentally inaccurate. Such habits/beliefs may contribute to creating social norms, but they are not themselves social norms; the distinction matters.
That’s a side of an idealism debate, a valid argument that pushes in this direction, but there are other arguments that push in the opposite direction, it’s not one-sided.
Some people change, given time or appropriate prodding. There are ideological (as in the set of endorsed principles) or emotional flaws, lack of capability at projecting sufficiently thick skin, or of thinking in a way that makes thick skin unnecessary, with defenses against admitting this or being called out on it. It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering, as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly.
I retain my view that to a first approximation, people don’t change.
And even if they do—well, when they’ve changed, then they can participate usefully and non-destructively. Personal flaws are, in a sense, forgiveable, as we are all human, and none of us is perfect; but “forgiveable” does not mean “tolerable, in the context of this community, this endeavor, this task”.
I think we are very far from “zero” in this regard. Going all the way to “zero” is not even what I am proposing, nor would propose (for example, I am entirely in favor of forbidding personal insults, vulgarity, etc., even if some hypothetical ideal reasoner would be entirely unfazed even by such things).
But that the damage done by catering to “utility monsters” of the sort who find requests for clarification to be severely unpleasant, is profound and far-ranging, seems to me to be too obvious to seriously dispute. It’s hypothetically possible to acknowledge this while claiming that failing to cater thusly has even more severely damaging consequences, but—well, that would be one heck of an uphill battle, to make that case.
Well, I’m certainly all for that.
I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples and there are no standard words like “weak insult” to delineate the issue, it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.
I agree that unreasonable demands are unreasonable. Pointing them out gains more weight after you signal ability to correctly perceive the distinction between “reasonable”/excusable and clearly unreasonable demands for catering. Though that often leads to giving up or not getting involved. So there is value in idealism in a neglected direction, it keeps the norm of being aware of that direction alive.
I must confess that I am very skeptical. It seems to me that any relevant thing that would need to be avoided, is a thing that is actually good, and avoiding which is bad (e.g., asking for examples of claims, concretizations of abstract concepts, clarifications of term usage, etc.). Of course if there were some action which were avoidable as cheaply (both in the “effort to avoid” and “consequences of avoiding” sense) as vulgarity and personal insults are avoidable, then avoiding it might be good. (Or might not; there is at least one obvious way in which it might actually be bad to avoid such things even if it were both possible and cheap to do so! But we may assume that possibility away, for now.)
But is there such a thing…? I find it difficult to imagine what it might be…
I agree that it’s unclear that steps in this direction are actually any good, or if instead they are mildly bad, if we ignore instances of acute conflict. But I think there is room for optimization that won’t have substantive negative consequences in the dimensions worth caring about, but would be effective in avoiding conflict.
The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is. To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!
So it’s things like adopting lsusr’s suggestion to prefer statements to questions. A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs. If it’s still not obvious, it either wouldn’t work with more explicit explanation, or it’s my argument’s problem, and then it’s no loss, this heuristic leaves the asymmetry intact. I might clarify when asked for clarification. Things like that, generated as appropriate by awareness of this objective.
One does not capitulate to utility monsters, especially not if one’s life isn’t dependent on it.
I wholly agree.
As I said in reply to that comment, it’s an interesting suggestion, and I am not entirely averse to applying it in certain cases. But it can hardly be made into a rule, can it? Like, “avoid vulgarity” and “don’t use direct personal attacks” can be made into rules. There generally isn’t any reason to break them, except perhaps in the most extreme, rare cases. But “prefer statements to questions”—how do you make that a rule? Or anything even resembling a rule? At best it can form one element of a set of general, individually fairly weak, suggestions about how to reduce conflict. But no more than that.
I follow just this same heuristic!
Unfortunately, it doesn’t exactly work to eliminate or even meaningfully reduce the incidence of utility-monster attack—as this very post we’re commenting under illustrates.
(Indeed I’ve found it to have the opposite effect. Which is a catch-22, of course. Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.)
I’m gesturing at settling into an unsatisfying strategic equilibrium, as long as there isn’t enough engineering effort towards clarifying the issue (negotiating boundaries that are more reasonable-on-reflection than the accidental status quo). I don’t mean capitulation as a target even if the only place “not provoking” happens to lead is capitulation (in reality, or given your model of the situation). My model doesn’t say that this is the case.
The problem with this framing (as you communicate it, not necessarily in your own mind) is that it could look the same even if there are affordances for de-escalation at every step, and it’s unclear how efficiently they were put to use (it’s always possible to commit a lot of effort towards measures that won’t help; the effort itself doesn’t rule out availability of something effective). Equivalence between “not provoking” and “capitulation” is a possible conclusion from observing absence of these affordances, or alternatively it’s the reason the affordances remain untapped. It’s hard to tell.
What would any of what you’re alluding to look like, more concretely…?
(Of course I also object to the term “de-escalation” here, due to the implication of “escalation”, but maybe that’s beside the point.)
Like escalation makes a conflict more acute, de-escalation settles it. Even otherwise uninvolved parties could plot either, there is no implication of absence of de-escalation being escalation. Certainly one party could de-escalate a conflict that the other escalates.
Some examples are two comments up, as well as your list of things that don’t work. Another move not mentioned so far is deciding to exit certain conversations.
The harder and more relevant question is whether some of these heuristics have the desired effect, and which ones are effective when. I think only awareness of the objective of de-escalation could apply these in a sensible way, specific rules (less detailed than a book-length intuition-distilling treatise) won’t work efficiently (that is, without sacrificing valuable outcomes).
I don’t think I disagree with anything you say in particular, not exactly, but I really am not sure that I have any sense of what the category boundaries of this “de-escalation” are supposed to be, or what the predicate for it would look like. (Clearly the naive connotation isn’t right, which is fine—although maybe it suggests a different choice of term? or not, I don’t really know—but I’m not sure where else to look for the answers.)
Maybe this question: what exactly is “the desired effect”? Is it “avoid conflict”? “Avoid unnecessary conflict”? “Avoid false appearance of conflict”? “Avoid misunderstanding”? Something else?
Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.
In some situations acute conflict could be useful, a Schelling point for change (time to publish relevant essays, which might be heard more vividly as part of this event). If it’s not useful, I think de-escalation is the way, with absence of acute conflict as the desired effect.
(De-escalation is not even centrally avoidance of individual instances of conflict. I think it’s more important what the popular perception of one’s intentions/objectives/attitudes is, and to prevent formation of grudges. Mostly not bothering those who probably have grudges. This more robustly targets absence of acute conflict, making some isolated incidents irrelevant.)
Is this really anything like a natural category, though?
Like… obviously, “moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized” are things that happen. But once you say “not necessarily intentionally” in your definitions of “escalation” and “de-escalation”, aren’t you left with “whatever actions happen to increase the chance of there being an acute conflict” (and similarly “decrease” for “de-escalation”)? But what actions have these effects clearly depends heavily on all sorts of situational factors, identities and relationships of the participants, the subject matter of the conversation, etc., etc., such that “what specific actions will, as it will turn out, have contributed to increasing/decreasing the chance of conflict in particular situation X” is… well, I don’t want to say “not knowable”, but certainly knowing such a thing is, so to speak, “interpersonal-interaction-complete”.
What can really be said about how to avoid “acute conflict” that isn’t going to have components like “don’t discuss such-and-such topics; don’t get into such-and-such conversations if people with such-and-such social positions in your environment have such-and-such views; etc.”?
Or is that in fact the sort of thing you had in mind?
I guess my question is: do you envision the concrete recommendations for what you call “de-escalation” or “avoiding acute conflict” to concern mainly “how to say it”, and to be separable from “what to say” and “whom to say it to”? It seems to me that such things mostly aren’t separable. Or am I misunderstanding?
(Certainly “not bothering those who probably have grudges” is basically sensible as a general rule, but I’ve found that it doesn’t go very far, simply because grudges don’t develop randomly and in isolation from everything else; so whatever it was that caused the grudge, is likely to prevent “don’t bother person with grudge” from being very applicable or effective.)
Also, it almost goes without saying, but: I think it is extremely unhelpful and misleading to refer to the sort of thing you describe as “enforcement”. This is not a matter of “more [or less] specific interpretation”; it’s just flatly not the same thing.
This might be a point of contention, but honestly, I don’t really understand, and do not find myself that curious about, a model of social norms that would produce the belief that only moderators can enforce norms in any way, and I am bowing out of this discussion. (The vast majority of social spaces with norms do not even have any kind of official moderator, so what does this model predict about, say, the average dinner party or college class?)
My guess is 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space and can maybe engage with Said on explaining this, and I would appreciate someone else jumping in and explaining those models, but I don’t have the time and patience to do this.
All right, I’ll give it a try (cc @Said Achmiz).
Enforcing norms of any kind can be done either by (a) physically preventing people from breaking them—we might call this “hard enforcement”—or (b) inflicting unpleasantness on people who violate said norms, and/or making it clear that this will happen (that unpleasantness will be inflicted on violators), which we might call “soft enforcement”.[1]
Bans are hard enforcement. Downvotes are more like soft enforcement, though karma does matter for things like sorting and whether a comment is expanded by default, so there’s some element of hardness. Posting critical comments is definitely soft enforcement; posting a lot of intensely critical comments is intense soft enforcement. Now, compare with Said’s description elsewhere:
Said is clearly aware of hard enforcement and calls that “enforcement”. Meanwhile, what I call “soft enforcement”, he says isn’t anything at all like “enforcement”. One could put this down to a mere difference in terms, but I think there’s a little more.
It seems accurate to say that Said has an extremely thick skin. Probably to some extent deliberately so. This is admirable, and among other things means that he will cheerfully call out any local emperor for having no clothes; the prospect of any kind of social backlash (“soft enforcement”) seems to not bother him, perhaps not even register to him. Lots of people would do well to be more like him in this respect.
However, it seems that Said may be unaware of the degree to which he’s different from most people in this[2]. (Either in naturally having a thick skin, or in thinking “this is an ideal which everyone should be aspiring to, and therefore e.g. no one would willingly admit to being hurt by critical comments and downvotes”, or something like that.) It seems that Said may be blind to one or more of the below:
That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.
I anticipate a possible objection here: “Well, if I incentivize people to think more rigorously, that seems like a good thing.” At this point the question is “Do Said’s comments enforce any norm at all?”, not “Are Said’s comments pushing people in the right direction?”. (For what it’s worth, my vague memory includes some instances of “Said is asking the right questions” and other instances of “Said is asking dumb questions”. I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.)
Second anticipated objection: “I’m not deliberately trying to enforce anything.” I think it’s possible to do this non-deliberately, even self-destructively. For example, a person could tell their friends “Please tell me if I’m ever messing up in xyz scenarios”, but then, when a friend does so, respond by interrogating the friend about what makes them qualified to judge xyz, have they ever been wrong about xyz, were they under any kind of drugs or emotional distraction or sleep deprivation at the time of observation, do they have any ulterior motives or reasons for self-deception, do their peers generally approve of their judgment, how smart are they really, what were their test scores, have they achieved anything intellectually impressive, etc. (This is avoiding the probably more common failure mode of getting offended at the criticism and expressing anger.) Like, technically, those things are kind of useful for making the report more informative, and some of them might be worth asking in context, but it is easy to imagine the friend finding it unpleasant, either because it took far more time than they expected, or because it became rather invasive and possibly touched on topics they find unpleasant; and the friend concluding “Yeesh. This interaction was not worth it; I won’t bother next time.”
And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)
Anyway, it seems that the claim from various complainants is that Said is, deliberately or not, providing an interface of “If your posts aren’t written in a certain way, then Said is likely to ask a bunch of clarifying questions, with the result that either you may look ~unrigorous or you have to write a bunch of time-consuming replies”, and thus this constitutes soft-enforcing a norm of “writing posts in a certain way”.
Or, regarding the “clarifying questions need replies or else you look ~unrigorous” norm… Actually, technically, I would say that’s not a norm Said enforces; it’s more like a norm he invokes (that is, the norm is preexisting, and Said creates situations in which it applies). As Said says elsewhere, it’s just a fact that, if someone asks a clarifying question and you don’t have an answer, there are various possible explanations for this, one of which is “your idea is wrong”.[4] And I guess the act of asking a question implies (usually) that you believe the other person is likely to answer, so Said’s questions do promulgate this norm even if they don’t enforce it.
Moreover, this being the website that hosts Be Specific, this norm is stronger here than elsewhere. Which… I do like; I don’t want to make excuses for people being unrigorous or weak. But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category. There’s a benefit and a cost to doing so—the benefit being the resulting clarity, the cost being the time and any unpleasantness involved in answering. My brain generates the story “Said, with his extremely thick skin (and perhaps being a weird alien more generally), faces a very difficult task in relating to people who aren’t like him in that respect, and isn’t so unusually good at relating to others very unlike him that he’s able to judge the costs accurately; in practice he underestimates the costs and asks too often.”
And usually anything that does (a) also does (b). Removing someone’s ability to do a thing, especially a thing they were choosing to do in the past, is likely unpleasant on first principles; plus the methods of removing capabilities are usually pretty coarse-grained. In the physical world, imprisonment is the prototypical example here.
It also seems that Duncan is the polar opposite of this (or at least is in that direction), which makes it less surprising that it’d be difficult for them to come to common understanding.
There was a time at work when I was running a script that caused problems for a system. I’d say that this could be called the system’s fault—one piece of the causal chain was a policy of the system’s that I’d never heard of and that seemed like the wrong policy, and another piece was the system misidentifying a certain behavior.
In any case, the guy running the system didn’t agree with the goal of my script, and I suspect resented me because of the trouble I’d caused (in that and in some other interactions). I don’t think he had the standing to say I’m forbidden from running it, period; but what he did was tell me to put my script into a pull request, and then do some amount of nitpicking the fuck out of it and requesting additional features; one might call it an isolated demand for rigor, by the standards of other scripts. Anyway, this was a side project for me, and I didn’t care enough about it to push through that, so I dropped it. (Whether this was his intent, I’m not sure, but he certainly didn’t object to the result.)
Incidentally, the more reasonable and respectable the questioner looks, the less plausible explanations like “you think the question is stupid or not worth your time” become, and therefore the greater the pressure to reply on someone who doesn’t want to look wrong. (One wonders if Said should wear a jester’s cap or something, or change his username to “troll”. Or maybe Said can trigger a “Name Examples Bot”, which wears a silly hat, in lieu of asking directly.)
(Separately from my longer reply: I do want to thank you for making the attempt.)
I have already commented extensively on this sort of thing. In short, if someone perceives something so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion as receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction, so severe that I cannot see how such a person could possibly expect to participate usefully in any sort of discussion forum, much less one that’s supposed to be about “advancing the art of rationality” or any such thing.
I mean, forget, for the moment, any question of “incentivizing” anyone in any way. I have no idea how it’s even possible to have discussions about anything without anyone ever asking you for clarification or explanation of anything. What does that even look like? I really struggle to imagine how anything can ever get accomplished or communicated while avoiding such things.
And the idea that “requesting more clarification and explanation” constitutes “norm enforcement” in virtue of its unpleasantness (rather than, say, being a way to exemplify praiseworthy behaviors) seems like a thoroughly bizarre view. Indeed, it’s especially bizarre on Less Wrong! Of all the forums on the internet, here, where it was written that “the first virtue is curiosity”, and that “the first and most fundamental question of rationality is ‘what do you think you know, and why do you think you know it?’”…!
There’s certainly a good deal of intellectual and mental diversity among the Less Wrong membership. (Perhaps not quite enough, I sometimes think, but a respectable amount, compared to most other places.) I count this as a good thing.
Yes. Having to file a bunch of bureaucratic forms (or else not getting the result you want). Having to answer your friend’s questions (on pain of quarrel or hurtful interpersonal conflict with someone close to you).
But nobody has to reply to comments. You can just downvote and move on with your life. (Heck, you don’t even have to read comments.)
As for the rest, well, happily, you include in your comment the rebuttal to the rest of what I might have wanted to rebut myself. I agree that I am not, in any reasonable sense of the word, “enforcing” anything. (The only part of this latter section of your comment that I take issue with is the stuff about “costs”; but that, I have already commented on, above.)
I’ll single out just one last bit:
I think you’ll find that I don’t say “name three examples” every single time someone mentions a category, either (nor—to pre-empt the obvious objection—is there any obvious non-hyperbolic version of this implied claim which is true). In fact I’m not sure I’ve ever said it. As gwern writes:
I must confess that I don’t sympathize much with those who object majorly. I feel comfortable with letting conversations on the public internet fade without explanation. “I would love to reply to everyone [or, in some cases, “I used to reply to everyone”] but that would take up more than all of my time” is something I’ve seen from plenty of people. If I were on the receiving end of the worst version of the questioning behavior from you, I suspect I’d roll my eyes, sigh, say to myself “Said is being obtuse”, and move on.
That said, I know that I am also a weird alien. So here is my attempt to describe the others:
“I do reply to every single comment” is a thing some people do, often in their early engagement on a platform, when their status is uncertain. (I did something close to that on a different forum recently, albeit more calculatedly as an “I want to reward people for engaging with my post so they’ll do more of it”.) There isn’t really a unified Internet Etiquette that everyone knows; the unspoken rules in general, and plausibly on this specifically, vary widely from place to place.
I also do feel some pressure to reply if the commenter is a friend I see in person—that it’s a little awkward if I don’t. This presumably doesn’t apply here.
I think some people have a self-image that they’re “polite”, which they don’t reevaluate especially often, and believe that it means doing certain things such as giving decent replies to everyone; and when someone creates a situation in which being “polite” means doing a lot of work, that may lead to significant unpleasantness (and possibly lead them to resent whoever put them in that situation; a popular example like this is Bilbo feeling he “has to” feed and entertain all the dwarves who come visiting, being very polite and gracious while internally finding the whole thing very worrying and annoying).
If the conversation begins well enough, that may create more of a politeness obligation in some people’s heads. The fact that someone had to create the term “tapping out” is evidence that some people’s priors were that simply dropping the conversation was impolite.
Looking at what’s been said, “frustration” is mentioned. It seems likely that, ex ante, people expect that answering your questions will lead to some reward (you’ll say “Aha, I understand, thank you”; they’ll be pleased with this result), and if instead it leads to several levels of “I don’t understand, please explain further” before they finally give up, then they may be disappointed ex post. Particularly if they’ve never had an interaction like this before, they might have not known what else to do and just kept putting in effort much longer than a more sophisticated version of them would have recommended. Then they come away from the experience thinking, “I posted, and I ended up in a long interaction with Said, and wow, that sucked. Not eager to do that again.”
It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
In some places on the internet, trolling is or has been a major problem. Making someone do a bunch of work by repeatedly asking “Why?” and “How do you know that?”, and generally applying an absurdly high standard of rigor, is probably a tactic that some trolls have engaged in to mess with people. (Some of my friends who like to tease have occasionally done that.) If someone seems to be asking a bunch of obtuse questions, I may at least wonder whether it’s deliberate. And interacting with someone you suspect might be trolling you—perhaps someone you ultimately decide is pretty trollish after a long, frustrating interaction—seems potentially uncomfortable.
(I personally tend to welcome the challenge of explaining myself, because I’m proud of my own reasoning skills (and probably being good at it makes the exercise more enjoyable) and aspire to always be able to do that; but others might not. Perhaps some people have memories of being tripped up and embarrassed. Such people should get over it, but given that not all of them have done so… we shouldn’t bend over backwards for them, to be sure, but a bit of effort to accommodate them seems justifiable.)
Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.
I find this hard to relate to—I’m extremely secure in my own intelligence, and react to the idea of someone being possibly smarter than me with “Ooh, I hope so, I wish that were so! (But I doubt it!)”; if someone comes away thinking I’m stupid, I tend to find that amusing, at worst disappointing (disappointed in them, that is). I suspect your background resembles mine in this respect.
But I hear that teachers and even parents, frequently enough for this to be a problem, feel threatened when a kid says they’re wrong (and backs it up). (To some extent this may be due to authority-keeping issues.) I hear that often kids in school are really afraid of being called, or shown to be, stupid. John Holt (writing from his experience as a teacher—the kids are probably age 10 or so) says:
(By the way, someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high (relative to their peers in their formative years), so this would be a self-censoring fear. I don’t think I’ve seen anyone mention intellectual insecurity in connection to this whole discussion, but I’d say it likely plays at least a minor role, and plausibly plays a major role.)
Again, if school traumatizes people into having irrational fears about this, that’s not a good thing, it’s the schools’ fault, and meanwhile the people should get over it; but again, if a bunch of people nevertheless haven’t gotten over it, it is useful to know this, and it’s justifiable to put some effort into accommodating them. How much effort is up for debate.
My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.
Gwern is strong. You (and Zack) are also strong. Some people are weaker. One could design a forum that made zero accommodations for the weak. The idea is appealing; I expect I’d enjoy reading it and suspect I could hold my own, commenting there, and maybe write a couple of posts. I think some say that Less Wrong 1.0 was this, and too few people wanted to post there and the site died. One could argue that, even if that’s true, today there are enough people (plus enough constant influx due to interest in AI) to have a critical mass and such a site would be viable. Maybe. One could counterargue that the process of flushing out the weak is noisy and distracting, and might drive away the good people.
As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan, he says that most people are not “Keepers” (trained ultra-rationalists, always looking unflinchingly at harsh truths, expected to remain calm and clear-headed no matter what they’re dealing with, etc.), that most people are not fit to be Keepers, and that it’s fine and good that they don’t hold themselves to that standard. Now, like, I guess one could imagine there should be at least enough Keepers to have their own forum, and perhaps Less Wrong could be such a forum. Well, one might say that having an active forum that trains people who are not yet Keepers is a strictly easier target than, and likely a prerequisite for, an active and long-lived Keeper forum. If LW is to be the Keeper forum, where are the Keepers trained? The SSC subreddit? Just trial by fire and take the fraction of a fraction of the population who come to the forum untrained and do well without any nurturing?
I don’t know. It could be the right idea. I would give it maybe 25% odds that this is better than some more civilian-accommodating thing like what we have today. I am really not an expert on forecasting this, and am pretty comfortable leaving it up to the current LW team. (I also note that, if we manage to do something like enhance the overall population’s intelligence by a couple of standard deviations—which I hope will be achievable in my lifetime—then the Keeper pipeline becomes much better.) And no, I don’t think it should do much in the way of accommodating civilians at the expense of the strong—but the optimal amount of doing that is more than zero.
Much of what you write here seems to me to be accurate descriptively, and I don’t have much quarrel with it. The two most salient points in response, I think, are:
To the original question that spawned this subthread (concerning “[implicit] enforcement of norms” by non-moderators, and how such a thing could possibly work), basically everything in your comment here is non-responsive. (Which is fine, of course—it doesn’t imply anything bad about your comment—but I just wanted to call attention to this.)
However accurate your characterizations may be descriptively, the (or, at least, an) important question is whether your prescriptions are good normatively. On that point I think we do have disagreement. (Details follow.)
“Basic competence” is usually a category error, I think. (Not always, but usually.) One can have basic competence at one’s profession, or at some task or specialty; and these things could be called into question. And there is certainly a norm, in most social contexts, that a non-specialist questioning the basic competence of a specialist is a faux pas. (I do not generally object to that norm in wider society, though I think there is good reason for such a norm to be weakened, at least, in a place like Less Wrong; but probably not absent entirely, indeed.)
What this means, then, is that if I write something about, let’s say, web development, and someone asks me for clarification of some point, then the implicatures of the question depend on whether the asker is himself a web dev. If so, then I address him as a fellow specialist, and interpret his question accordingly. If not, then I address him as a non-specialist, and likewise interpret his question accordingly. In the former case, the asker has standing to potentially question my basic competence, so if I cannot make myself clear to him, that is plausibly my fault. In the latter case, he has no such standing, but likewise a request for clarification from him can’t really be interpreted as questioning my basic competence in the first place; and any question that, from a specialist, would have that implication, from a non-specialist is merely revelatory of the asker’s own ignorance.
Nevertheless I think that you’re onto something possibly important here. Namely, I have long noticed that there is an idea, a meme, in the “rationalist community”, that indeed there is such a thing as a generalized “basic competence”, which manifests itself as the ability to understand issues of importance in, and effectively perform tasks in, a wide variety of domains, without the benefit of what we would usually see as the necessary experience, training, declarative knowledge, etc., that is required to gain expertise in the domain.
It’s been my observation that people who believe in this sort of “generalized basic competence”, and who view themselves as having it, (a) usually don’t have any such thing, and (b) get quite offended when it’s called into question, even by the most indirect implication, or even conditionally. This fits the pattern you describe, in a way, but of course that is missing a key piece of the puzzle: what is unpleasant is not being asked for clarification, but being revealed to be a fraud (which would be the consequence of demonstrably failing to provide any satisfying clarification).
Definitely. (As I’ve alluded to earlier in this comment section, I am quite familiar with this problem from the administrator’s side.)
But it’s quite possible, and not even very hard, to prove oneself a non-troll. (Which I think that I, for instance, have done many times over. There aren’t many trolls who invest as much work into a community as I have. I note this not to say something like “what I’ve contributed outweighs the harm”, as some of the moderators have suggested might be a relevant consideration—and which reasoning, quite frankly, makes me uncomfortable—but rather to say “all else aside, the troll hypothesis can safely be discarded”.)
In other words, yes, trolling exists, but for the purposes of this discussion we can set that fact aside. The LW moderation team have shown themselves to be more than sufficiently adept at dealing with such “cheap” attacks that we can, to a first (or even second or third) approximation, simply discount the possibility of trolling, when talking about actual discussions that happen here.
As it happens, I quite empathize with this worry—indeed I think that I can offer a steelman of your description here, which (I hope you’ll forgive me for saying) does seem to me to be just a bit of a strawman (or at least a weakman).
There are indeed some really impressive people on Less Wrong. (Their proportion in the overall membership is of course lower than it was in the “glory days”, but nevertheless they are a non-trivial contingent.) And the worry is not, perhaps, that one will be made to look stupid in front of them, but rather that one will waste their time. “Who am I,” the potential contributor might think, “to offer my paltry thoughts on any of these lofty matters, to be listed alongside the writings of these greats, such that the important and no doubt very busy people who read this website will have to sift through the dross of my embarrassingly half-formed theses and idle ramblings, in the course of their readings here?” And then, when such a person gets up the confidence and courage to post, if the comments they get prove at once (to their minds) that all their worries were right, that what they’ve written is worthless, little more than spam—well, surely they’ll be discouraged, their fears reinforced, their shaky confidence shattered; and they won’t post again. “I have nothing to contribute,” they will think, “that is worthy of this place; I know this for a fact; see how my attempts were received!”
I’ve seen many people express worries like this. And there are, I think, a few things to say about the matter.
First: however relevant this worry may have been once, it’s hardly relevant now.
This is for two reasons, of which the first is that the new Less Wrong is designed precisely to alleviate such worries, with the “personal” / “frontpage” distinction. Well, at least, that would be true, if not for the LW moderators’ quite frustrating policy of pushing posts to the frontpage section almost indiscriminately, all but erasing the distinction, and preventing it from having the salutary effect of alleviating such worries as I have described. (At least there’s Shortform, though?)
The second reason why this sort of worry is less relevant is simply that there’s so much more garbage on Less Wrong today. How plausible is it, really, to look at the current list of frontpage posts, and think “gosh, who am I to compete for readers’ time with these great writings, by these great minds?” Far more likely is the opposite thought: “what’s the point of hurling my thoughts into this churning whirlpool of mediocrity?” Alright, so it’s not quite Reddit, but it’s bad enough that the moderators have had to institute a whole new set of moderation policies to deal with the deluge! (And well done, I say, and long overdue—in this, I wholly support their efforts.)
Second: I recall someone (possibly Oliver Habryka? I am not sure) suggesting that the people who are most worried about not measuring up tend also to be those whose contributions would be some of the most useful. This is a model which is more or less the opposite of your suggestion that “someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high”; it claims, instead, something like “someone being afraid that they won’t measure up is probably Bayesian evidence that their intellectual standards as applied to themselves are high, and that their ideas are valuable”.
I am not sure to what extent I believe either of these two models. But let us take the latter model for granted, for a moment. Under this view, any sort of harsh criticism, or even just anything but the most gentle handling and the most assiduous bending-over-backwards to avoid any suggestion of criticism, risks driving away the most potentially valuable contributors.
Of course, one problem is that any lowering of standards mostly opens the floodgates to a tide of trash, which itself then acts to discourage useful contributions. But let’s imagine that you can solve that problem—that you can set up a most discerning filter, which keeps out all the mediocre nonsense, all the useless crap, but somehow does this without spooking the easily-spooked but high-value authors.
But even taking all of that for granted—you still haven’t solved the fundamental problems.
Problem (a): even the cleverest of thinkers and writers sometimes have good ideas but sometimes have bad ideas; or ideas that have flaws; or ideas that are missing key parts; or, heck, they simply make mistakes, accidentally cite the wrong thing and come to the wrong conclusion, misremember, miscount… you can’t just not ever engage with any assumption but that the author’s ideas are without flaw, and that your part is only to respectfully learn at the author’s feet. That doesn’t work.
Problem (b): even supposing that an idea is perfect—what do you do with it? In order to make use of an idea, you must understand it, you must explore it; that means asking questions, asking for clarifications, asking for examples. That is (and this is a point which, incredibly, seems often to be totally lost in discussions like this) how people engage with ideas that excite them! (Otherwise—what? You say “wow, amazing” and that’s it? Or else—as I have personally seen, many times—you basically ignore what’s been written, and respond with some only vaguely related commentary of your own, which doesn’t engage with the post at all, isn’t any attempt to build anything out of it, but is just a sort of standalone bit of cleverness…)
No, this is just confused.
Of course I don’t have a bot that replies to every comment with “Be more specific”, but that’s not because there’s some sort of tradeoff; it’s simply that it’s not always appropriate or relevant or necessary. Why ask for clarification, if all is already clear? Why ask for examples, if they’ve already been provided, or none seem needed? Why ask for more specificity, if one’s interlocutor has already expressed themselves as specifically as the circumstances call for? If someone writes a post about “authenticity”, I may ask what they mean by the word; but what mystery, what significance, is there in the fact that I don’t do the same when someone writes a post about “apples”? I know what apples are. When people speak of “apples” it’s generally clear enough what they’re talking about. If not—then I would ask.
There is no shame in being weak. (It is an oft-held view, in matters of physical strength, that the strong should protect the weak; I endorse that view, and hold that it applies in matters of emotional and intellectual strength as well.) There may be shame in remaining weak when one can become strong, or in deliberately choosing weakness; but that may be disputed.
But there is definitely shame in using weakness as a weapon against the strong. That is contemptible.
Strength may not be required. But weakness must not be valorized. And while accommodating the weak is often good, it must never come at the expense of discouraging strength, for then the effort undermines itself, and ultimately engineers its own destruction.
I deliberately do not, and would not, cite Eliezer’s recent writings, and especially not those about dath ilan. I think that the ideas you refer to, in particular (about the Keepers, and so on), are dreadfully mistaken, to the point of being intellectually and morally corrosive.
Just for the record, your first comment was quite good at capturing some of the models that drive me and the other moderators.
This one is not, which is fine and wasn’t necessarily your goal, but I want to prevent any future misunderstandings.
I’m super not interested in putting effort into talking about this with Said. But a low-effort thing to say is: my review of Order Without Law seems relevant. (And the book itself moreso, but that’s less linkable.)
I do recall reading and liking that post, though it’s been a while. I will re-read it when I have the chance.
But for now, a quick question: do you, in fact, think that the model described in that post applies here, on Less Wrong?
(If this starts to be effort I will tap out, but briefly:)
It’s been a long time since I read it too.
I don’t think there’s a specific thing I’d identify as “the model described in that post”.
There’s a hypothesis that forms an important core of the book and probably the review; but it’s not the core of the reason I pointed to it.
I do expect bits of both the book and the review apply on LW, yes.
Well, alright, fair enough.
Could you very briefly say more about what the relevance is, then? Is there some particular aspect of the linked review of which you think I should take note? (Or is it just that you think the whole review is likely to contain some relevant ideas, but you don’t necessarily have any specific parts or aspects in mind?)
Sorry. I spent a few minutes trying to write something and then decided it was going to be more effort than I wanted, so...
I do have something in mind, but I apparently can’t write it down off the cuff. I can gesture vaguely at the title of the book, but I suppose that’s unlikely to be helpful. I don’t have any specific sections in mind.
(I think I’m unlikely to reply again unless it seems exceptionally likely that doing so will be productive.)
Alright, no worries.
Dinner parties have hosts, who can do things like: ask a guest to engage or not engage in some behavior; ask a guest to leave if they’re disruptive or unwanted; not invite someone in the first place; in the extreme, call the police (having the legal standing to do so, as the owner of the dwelling where the party takes place).
College classes have instructors, who can do things like: ask a student to engage or not engage in some behavior; ask a student to leave if they’re disruptive; cause a student to be dropped from enrollment in the course; call campus security to eject the student (having the organizational and legal standing to do so, as an employee of the college, who is granted the mandate of running the lecture/course/etc.).
(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)
I, too, am capable of describing such a model.
But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.
On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.
Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).
Upvotes/downvotes are very slightly like “enforcement”. (But of course we’re not talking about upvotes/downvotes here.)
Banning a user from your posts is a bit more like “enforcement”. (But we’re definitely not talking about that here.)
Given the existence of moderators on Less Wrong, I do not, indeed, see any way to describe anything that I have ever done as “enforcement” of anything. It seems to me that such a claim is incoherent.
That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.
One last reply:
Indeed, college classes (and classes in general) seem like an important case study, since in my experience it is very clear that only a fraction of the norms in those classes are set by the professor or teacher, and that there are clearly many other sources of norms and of the associated enforcement of those norms.
Experiencing those bottom-up norms is a shared experience, since almost everyone went through high school and college, so it seems like a good reference point.
Of course this is true; it is not just the instructor but also the college administration, etc., that sets and enforces norms.
But it sure isn’t the students!
(And this is even more true in high school. The students have no power to set any norms, except that which is given them by the instructor/administration/etc.—and even that rarely happens.)
Have you been to an American high school and/or watched at least one movie about American high schools?
I have done both of those things, yes.
EDIT: I have also attended not one but several (EDIT 2: four, in fact) American colleges.
A central plot point of many high school movies is what is and isn’t socially acceptable to do. For example, Regina in Mean Girls enforced a number of rules on her clique, and attempted, with significant but not complete success, to enforce them on others.
I do think it would be useful for you to say how much time should elapse without a satisfactory reply by some representative members of this 95% before we can reasonably evaluate whether this prediction has been proven true.
Oh, the central latent variable in my uncertainty here is “is anyone willing to do this?”, not “is anyone capable of this?”. My honest guess is that the answer is “no”, because this kind of conversation really doesn’t seem fun, and we are 7 levels deep into a 400-comment post.
My guess is that if you actively reach out and put effort into trying to get someone to explain it to you, by e.g. putting out a bounty, making a top-level post, or somehow sending a costly signal that you are genuinely interested in understanding, then there is a much higher chance of that, but I don’t currently expect that to happen.
You do understand, I hope, how this stance boils down to “we want you to stop doing a thing, but we won’t explain what that thing is; figure it out yourself”?
No, it boils down to “we will enforce consistent rules and spend like 100+ hours trying to explain them if an established user is confused, and if that’s not enough, then I guess that’s life and we’ll move on”.
Describing the collective effort of the Lightcone team as “unwilling to explain what the thing is” seems really quite inaccurate, given the really quite extraordinary amount of time we have spent over the years trying to get our models across. You can of course complain about the ineffectuality of our efforts to explain, but I do not think you can deny the effort, and I do not currently know what to do that doesn’t involve many additional hours of effort.
Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?
I’ve been told (and only after much effort on my part in trying to get an answer) that the problem being solved here is something called “(implicit) enforcement of norms” on my part. I’ve yet to see any comprehensible (or even, really, seriously attempted) explanation of what that’s supposed to mean, exactly, and how any such thing can be done by a (non-moderator) user of Less Wrong. You’ve said outright that you refuse to attempt an explanation. “Unwilling to explain what the thing is” seems entirely accurate.
The one we’ve spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and have been discussing for 5+ years in terms of what the cost of your comments to the site has been.
It does not surprise me that you cannot summarize them or restate them in a way that shows you understand them, which is why more effort on explaining them does not seem worth it. The concepts here are also genuinely kind of tricky, and we seem to be coming from very different perspectives and philosophies, and while I do experience frustration, I can also see why this looks very frustrating for you.
I agree that I personally haven’t put a ton of effort in at this specific point in time (though the 2-3 hours on my comments with Zack seem related), though I have spent many dozens of hours in past years trying to point to what seem to me the same disagreements.
But which are not, like… stated anywhere? Like, in some sort of “what are the rules of this website” page, which explains these rules?
Don’t you think that’s an odd state of affairs, to put it mildly?
The concept of “ignorance of the law is no excuse” was mentioned earlier in this discussion, and it’s a reasonable one in the real world, where you generally can be aware of what the law is, if you’re at all interested in behaving lawfully[1]. If you get a speeding ticket, and say “I didn’t know I was exceeding the speed limit, officer”, the response you’ll get is “signs are posted; if you didn’t read them, that’s no excuse”. But that’s because the signs are, in fact, posted. If there were no signs, then it would just be a case of the police pulling over whoever they wanted, and giving them speeding tickets arbitrarily, regardless of their actual speed.
You seem to be suggesting that Less Wrong has rules (not “norms”, but rules!), which are defined only in places like “long, branching, deeply nested comment threads about specific moderation decisions” and “scattered over years of discussion with some specific user(s)”, and which are conceptually “genuinely kind of tricky”; but that violating these rules is punishable, like any rules violation might be.
Does this seem to you like a remotely reasonable way to have rules?
But note that this, famously, is no longer true in our society today, which does indeed have some profoundly unjust consequences.
I think we’ve tried pretty hard to communicate our target rules in this post and previous ones.
The best operationalization of them is in this comment, as well as the moderation warning I made ~5 years ago: https://www.lesswrong.com/posts/9DhneE5BRGaCS2Cja/moderation-notes-re-recent-said-duncan-threads?commentId=y6AJFQtuXBAWD3TMT
These are in a pinned top-level moderator comment on a moderation post that was pinned for almost a full week, so I don’t think this counts as being defined in “long, branching, deeply nested comment threads about specific moderation decisions”. I think we tried pretty hard here to extract the relevant decision-boundaries and make users aware of how we plan to make decisions going forward.
We are also thinking about how to think about having site-wide moderation norms and rules that are more canonical, though I share Ruby’s hesitations about that: https://www.lesswrong.com/posts/gugkWsfayJZnicAew/should-lw-have-an-official-list-of-norms
I don’t know of a better way to have rules than this. As I said in a thread to Zack, case law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does produce a body of law that, in order to navigate successfully, you have to study through the lines revealed in past litigation.
EDIT: Why do my comments keep double-posting? Weird.
… that comment is supposed to communicate rules?!
It says:
The only thing that looks like a rule here is “don’t imply people have an obligation to engage with [your] comments”. Is that the rule you’ve been talking about? (I asked this of Raemon and his answer was basically “yes but not only”, or something like that.)
And the rest pretty clearly suggests that there isn’t a clearly defined rule here.
The mod note from 5 years ago seems to me to be very clearly not defining any rules.
Here’s a question: if you asked ten randomly selected Less Wrong members: “What are the rules of Less Wrong?”—how many of them would give the correct answer? Not as a link to this or that comment, but in their own words (or even just by quoting a list of rules, minus the commentary)?
(What is the correct answer?)
How many of their answers would even match one another?
Yes, of course, but the way this works in real-world legal systems is that first there’s a law, and then there’s case law which establishes precedent for its application. (And, as you say, it hardly makes it easy to comply with the law. Perhaps I should retain an attorney to help me figure out what the rules of Less Wrong are? Do I need to have a compliance department…?) Real-world legal systems in well-functioning modern countries generally don’t take the approach of “we don’t have any written down laws; we’ll legislate by judgment calls in each case; even after doing that, we won’t encode those judgments into law; there will only be precedent and judicial opinion, and that will be the whole of the law”.[1]
Have there been societies in the past which have worked like this? I don’t know. Maybe we can ask David Friedman?