Great post, really like the structure and really really appreciate the transparency.
However, the sections about critics did leave me with a bad taste in my mouth. I am pretty sure you aren’t including me in the set of uncharitable/wrongheaded critics, but I am not fully confident, and I think this is kind of the problem. I imagine many of the people who had valid concerns and offered good criticism feel like they are being dismissed, or at least assign a significant probability to being dismissed, and I think this will have a significant chilling effect on good future criticism, which seems bad for everyone.
I think you definitely faced a lot of unfair criticism and verbal abuse, in a form that we would pretty heavily punish on the new LessWrong (man, that numbers guy), but I think it’s better to try to just ignore those critics, or be very careful to not accidentally use broad enough brushstrokes to also paint many of the good critics in the same light. I know you tried to avoid this by putting in explicit disclaimers, but it’s very hard from the inside to assess whether you think my criticism was constructed in good faith, and even on reflection I am not actually sure whether you think it was, and I expect others feel the same.
I want to emphasize again though that I am very happy you wrote this, and that overall this post has been quite valuable for me to read, and I am interested in chatting with you and others more about future iterations of Dragon Army and how to make it work. Keep up the good work.
I think it’s important to have public, common-knowledge deterrence of that sort of behavior. I think that part of what allowed it to flourish on LessWrong 1.0 is the absence of comments like my parenthetical, making it clear that that sort of thing is outside the Overton window. I claim that there were not enough defenders in the garden, and I think that a lot of LW is too unwilling to outgroup and disincentivize behavior that no one wants to see, because of something like not wanting to seem illiberal or close-minded or unwilling-to-rationally-consider-the-possibility-that-they’re-wrong.
I recognize that this is a place where we disagree, and where indeed you could easily turn out to be more correct than me. But that parenthetical was carefully and deliberately constructed over the course of weeks, with me spending more than two full hours on its wording once you add up all the musing and tinkering and consideration of the consequences. It was Very Much On Purpose, and very much intended to have cultural effects.
I predict that the chilling effects on good criticism will be smaller-enough than the chilling effects on bad criticism that it’s net worthwhile, in the end. In particular, I think the fact that you had difficulty telling whether I was referring to you or not is a feature, not a bug—as a result, you booted up your metacognition and judging/evaluatory algorithms, and users doing that as a matter of habit is one of my hopes/cruxes for LessWrong. We don’t need crippling anxiety, but if I could push a button to make LW commenters 3% more self-conscious or 3% less self-conscious (3% more reflective and deliberate and self-evaluatory and afraid-of-their-own-biases-leaking-through or 3% less so) and it had to be across-the-board as opposed to distinguishing between different subsets of people … I know which I’d choose.
(I imagine you don’t want a long back-and-forth about this here; I would be willing to contribute to a discussion on this in Meta if you want.)
The weatherman who predicts a 20% chance of rain on a sunny day isn’t necessarily wrong. Even the weatherman who predicts 80% chance of rain on a sunny day isn’t *necessarily* wrong.
If there’s a norm of shaming critics who predict very bad outcomes, of the sort “20% chance this leads to disaster”, then after shaming them the first four times their prediction fails to come true, they’re not going to mention it the fifth time, and then nobody will be ready for the disaster.
I don’t know exactly how to square this with the genuine beneficial effects of making people have skin in the game for their predictions, except maybe for everyone to be more formal about it and have institutions that manage this sort of thing in an iterated way using good math. That’s why I’m glad you were willing to bet me about this, though I don’t know how to solve the general case.
If there’s a norm of shaming critics who predict very bad outcomes
I think it is hugely important to point out that this is not the norm Duncan is operating under or proposing. I understand Duncan as saying “hey, remember those people who were nasty and uncharitable and disgusted by me and my plans? Their predictions failed to come true.”
Like, quoting from you during the original discussion of the charter:
I would never participate in the linked concept and I think it will probably fail, maybe disastrously.
But I also have a (only partially endorsed) squick reaction to the comments against it. I guess I take it as more axiomatic than other people that if people want to try something weird, and are only harming themselves, that if you make fun of them for it, you’re a bully.
Of course, here, “the comments against it” isn’t ‘anyone who speaks out against the idea.’ One can take jbeshir’s comment as an example of someone pointing directly at the possibility of catastrophic abuse while maintaining good discourse and epistemic norms.
---
I note that I am generally not a fan of vaguebooking / making interventions on the abstract level instead of the object level, and if I were going to write a paragraph like the one in the OP I would have named names instead of making my claims high-context.
Agreed that some people were awful, but I still think this problem applies.
If somebody says “There’s an 80% chance of rain today, you idiot, and everyone who thinks otherwise deserves to die”, then it’s still not clear that a sunny day has proven them wrong. Or rather, they were always wrong to be a jerk, but a single run of the experiment doesn’t do much to prove they were wronger than we already believed.
Or rather, they were always wrong to be a jerk, but a single run of the experiment doesn’t do much to prove they were wronger than we already believed.
To be clear, I agree with this. Furthermore, while I don’t remember people giving probability distributions, I think it’s fair to guess that critics as a whole (and likely even the irrational critics) put higher probability on the coarse description of what actually happened than Duncan or those of us who tried the experiment did, and that makes an “I told you so!” about their assigning lower probability to something that didn’t happen ring hollow.
I agree with this. Perhaps a better expression of the thing (if I had felt like it was the right spot in the piece to spend this many words) would’ve been:
they were systematically wrong then, in loudly espousing beliefs whose truth value was genuinely in question but for which they had insufficient justification, and wrong in terms of their belongingness within the culture of a group of people who want to call themselves “rationalists” and who care about making incremental progress toward actual truth, and I believe that the sacrifice of their specific, non-zero, non-useless data and perspective is well worth making to have the correct walls around our garden and weeding heuristics within it. And I see no reason for that to have changed in the intervening six months.
I suspect that coming out of the gate with that many words would’ve pattern-matched to whining, though, and that my specific parenthetical was still stronger once you take into account social reality.
I’m curious if you a) agree or disagree or something-else with the quote above, and b) agree or disagree or something-else with my prediction that the above would’ve garnered a worse response.
The problem is absolutely not that people were predicting very bad outcomes. People on Tumblr were doing things like (I’m working from memory here) openly speculating about how incredibly evil and sick and twisted Duncan must be to even want to do anything like this, up to something like (again, working from memory here) talking about conspiring to take Duncan down somehow to prevent him from starting Dragon Army.
As someone who didn’t follow the original discussions either on Tumblr or LW, I found this totally unclear from Duncan’s parenthetical remark in the OP. So I think for the purpose of “common-knowledge deterrence of that sort of behavior” that section totally failed, since lots of people must have, like Scott and me, gotten wrong ideas about what kind of behavior Duncan wanted to deter.
A part of my model here is that it’s impossible from a social perspective for me to point these things out explicitly.
I can’t describe the dynamic directly (my thinking contains some confusion) so I’ll point out an analogous thing.
Alex and Bradley have had a breakup.
Alex is more destabilized than Bradley, by the breakup—to the point that Alex finds it impossible to occupy the same space as Bradley. This is not a claim about Bradley being bad or in the wrong or responsible (nor the opposite); it’s just a brute fact about Alex’s emotional state.
There’s an event with open borders, or with broad-spectrum invites.
If Bradley goes, Alex cannot go. The same is not true in reverse; Bradley is comfortable shrugging and just handling it.
Alex absolutely cannot be the person to raise the question “Hey, maybe we have to do something about this situation, vis-a-vis inviting Alex or Bradley.”
If Alex says that, this is inevitably interpreted as Alex doing something like taking hostages, or trying to divide up the universe and force people to take sides, or being emotionally immature and unreasonable. This is especially true because Bradley’s right there, providing a contrasting example of “it’s totally fine for us to coexist in the same room.” Alex will look like The Source Of The Problem.
However, it’s completely fine if someone else (Cameron) says “Hey, look—I think Alex needs more space, and we as a social group should figure out some way to create space for the processing and healing to happen. Maybe we invite Bradley to this one, but tell both Bradley and Alex that we’ll invite Alex-and-only-Alex to the next one?”
Like, the social fabric is probably intelligent enough to handle the division without assigning fault or blame. But that requires third-party action. It can’t come from Alex; it can maybe barely come from Bradley, if Bradley is particularly mature and savvy (but if Bradley doesn’t feel like it, Bradley can just not).
In a similar sense, I tried real hard to point out the transgressions being made in the LW thread and on Tumblr, and this ended up backfiring on me, even though I claim with high confidence that if the objections had been raised by someone else, most LWers would’ve agreed with them.
So in this post, I drew the strongest possible line-in-the-sand that I could, and then primarily have sat back, rather than naming names or pulling quotes or trying to get specific. People hoping that I would get specific are (I claim) naively mispredicting what the results would have been, had I done so.
In this case, I owe the largest debts of gratitude to Qiaochu and Vaniver, for being the Cameron to my Alex-Bradley situation. They are saying things that it is unpossible for me to say, because of the way humans tend to pattern-match in such situations.
This is indeed an important dynamic to discuss, so I’m glad you brought it up, but I think your judgment of the correct way to handle it is entirely wrong, and quite detrimental to the health of social groups and communities.
You say:
Alex will look like The Source Of The Problem.
But in fact Alex not only “will look like”, but in fact is, the source of the problem. In fact, the entirety of the problem is Alex’s emotional issues (and any consequences thereof, such as social discomfort inflicted upon third parties, conflicts that are generated due to Alex’s presence or behavior, etc.). There is no problem beyond or separately from that.
However, it’s completely fine if someone else (Cameron) says “Hey, look—I think Alex needs more space, and we as a social group should figure out some way to create space for the processing and healing to happen. Maybe we invite Bradley to this one, but tell both Bradley and Alex that we’ll invite Alex-and-only-Alex to the next one?”
This is only “fine” to the extent that the social group as a whole understands, and endorses, the fact that this “solution” constitutes taking Alex’s side.
Now, it is entirely possible that the social group does indeed understand and endorse this—that they are consciously taking Alex’s side. Maybe Alex is a good friend of many others in the group; whereas Bradley, while they like him well enough, is someone they only know through Alex—and thus they owe him a substantially lesser degree of loyalty than they do Alex. Such situations are common enough, and there is nothing inherently wrong with taking one person’s side over the other in such a case.
What is wrong is taking one person’s side, while pretending that you’re being impartial.
A truly impartial solution would look entirely different. It would look like this:
“Alex, Bradley, both of you are free to come, or not come, as you like. If one or both of you have emotional issues, or conflicts, or anything like that—work them out yourselves. Our [i.e., the group’s] relationship is with both of you separately and individually; we will thus continue to treat both of you equally and fairly, just as we treat every other one of us.”
As for Cameron… were I Bradley, I would interpret his comment as covert side-taking. (Once again: it may be justified, and not dishonorable at all. But it is absolutely not neutral.)
I think the view I’d take is somewhere in between this view and the view that Duncan described.
If I’m sending out invites to a small dinner party, I’d just alternate between inviting Alex and Bradley.
However, if it’s an open invite thing, it seems like the official policy should be that Alex and Bradley are both invited (assuming all parties are in good standing with the group in general), but if I happen to be close to Bradley I might privately suggest that they skip out on some events so that Alex can go, because that seems like the decent thing to do. (And if Bradley does skip, I would probably consider that closer to supererogatory rather than mandatory and award them social points for doing so.)
Similarly, if I’m close to Alex, I might nudge them towards doing whatever processing is necessary to allow them to coexist with Bradley, so that Bradley doesn’t have to skip.
So I’m agreeing with you that official policy for open-invite things should be that both are invited. But I think I’m disagreeing about whether it’s ever reasonable to expect Bradley to skip some events for the sake of Alex.
I think your data set is impoverished. I think you could, in the space of five-minutes-by-the-clock, easily come up with multiple situations in which Alex is not at all the source of the problem, but rather Bradley, and I think you can also easily come up with multiple situations in which Alex and Bradley are equally to blame. In your response, you have focused only on cases in which it’s Alex’s fault, as if they represent the totality of possibility, which seems sloppy or dishonest or knee-jerk or something (a little). Your “truly impartial” solution is quite appropriate in the cases where fault is roughly equally shared, but miscalibrated in the former. Indeed, it can result in tacit social endorsement of abuse in rare-but-not-extremely-rare sorts of situations.
Neither ‘blame’ nor ‘fault’ are anywhere in my comment.
And that’s the point: your perspective requires the group to assign blame, to adjudicate fault, to take sides. Mine does not. In my solution, the group treats what has transpired as something that’s between Alex and Bradley. The group takes no position on it. Alex now proclaims an inability to be in the same room as Bradley? Well, that’s unfortunate for Alex, but why should that affect the group’s relationship with Bradley? Alex has this problem; Alex will have to deal with it.
To treat the matter in any other way is to take sides.
You say:
I think you could, in the space of five-minutes-by-the-clock, easily come up with multiple situations in which Alex is not at all the source of the problem, but rather Bradley,
How can this be? By (your own) construction, Bradley is fine with things proceeding just as they always have, w.r.t. the group’s activities. Bradley makes no impositions; Bradley asks for no concessions; Bradley in fact neither does nor says anything unusual or unprecedented. If Alex were to act exactly as Bradley is acting, then the group might never even know that anything untoward had happened.
Once again: it may be right and proper for a group to take one person’s side in a conflict. (Such as in your ‘abuse’ example.) But it is dishonest, dishonorable, and ultimately corrosive to the social fabric, to take sides while pretending to be impartial.
But in fact Alex not only “will look like”, but in fact is, the source of the problem. In fact, the entirety of the problem is Alex’s emotional issues
… and then say “Neither ‘blame’ nor ‘fault’ are anywhere in my comment.” I smell a motte-and-bailey in that. There’s obviously a difference between blame games and fault analysis (in the former, one assigns moral weight and docks karma from a person’s holistic score; in the latter, one simply says “X caused Y”). But even in the dispassionate fault analysis sense, it strikes me as naive to claim that Alex’s reaction is—in ALL cases that don’t involve overt abuse—entirely a property of Alex and is entirely Alex’s responsibility.
You seem to think that I’m claiming something like “it’s Alex’s fault that Alex feels this way”. But I’m claiming no such thing. In fact, basically the entirety of my point is that (in the “impartiality” scenario), as far as the group is concerned, it’s simply irrelevant why Alex feels this way. We can even go further and say: it’s irrelevant what Alex does or does not feel. Alex’s feelings are Alex’s business. The group is not interested in evaluating Alex’s feelings, in judging whether they are reasonable or unreasonable, in determining whether Alex is at fault for them or someone else is, etc. etc.
What I am saying is that Alex—specifically, Alex’s behavior (regardless of what feelings are or are not the cause of that behavior)—manifestly is the source of the problem for the group; that problem being, of course, “we now have to deal with one of our members refusing to be in the same room with another one of our members”.
As soon as you start asking why Alex feels this way, and whose fault is it that Alex feels this way, and whether it is reasonable for Alex to feel this way, etc., etc., you are committing yourself to some sort of side-taking. Here is what neutrality would look like:
Alex, to Group [i.e. spokesmember(s) thereof]: I can no longer stand to be in the same room as Bradley! Any event he’s invited to, I will not attend.
Group: Sounds like a bummer, man. Bradley’s invited to all public events, as you know (same as everyone else).
Alex: I have good reasons for feeling this way!
Group: Hey, that’s your own business. It’s not our place to evaluate your feelings, or judge whether they’re reasonable or not. Whether you come to things or not is, as always, your choice. You can attend, or not attend, for whatever reasons you like, or for no particular reason at all. You’re a free-willed adult—do what you think is best; you don’t owe us any explanations.
Alex: But it’s because…
Group (interrupting): No, really. It’s none of our business.
Alex: But if I have a really good reason for feeling this way, you’ll side with me, and stop inviting Bradley to things… right??
Group: Wrong.
Alex: Oh.
Separately:
But even in the dispassionate fault analysis sense, it strikes me as naive to claim that Alex’s reaction is—in ALL cases that don’t involve overt abuse—entirely a property of Alex and is entirely Alex’s responsibility.
Responsibility is one thing, but Alex’s reaction is obviously entirely a property of Alex. I am perplexed by the suggestion that it can be otherwise.
Yeah, but you can’t derive fault from property, because by your own admission your model makes no claim of fault. At most you can say that Alex is the immediate causal source of the problem.
Ah, but who will argue for the “Alexes” who were genuinely made uncomfortable by the proposed norms of Dragon’s Army—perhaps to the point of disregarding even some good arguments and/or evidence in favor of it—and who are now being conflated with horribly abusive people as a direct result of this LW2 post? Social discomfort can be a two-way street.
So this parenthetical-within-the-parenthetical didn’t help, huh?
(here I am specifically not referring to those who pointed at valid failure modes and criticized the idea in constructive good faith, of whom there were many)
I guess one might not have had a clear picture what Duncan was counting as constructive criticism.
There were people like that, but there were also people who talked about the risks without sending ally-type signals of “but this is worth trying” or “on balance this is a good idea” who Duncan would then accuse of “bad faith” and “strawmanning,” and would lump in with the people you’re thinking.
I request specific quotations rather than your personal summary. I acknowledge that I have not been providing specific quotations myself, and have been providing my summary; I acknowledge that I’m asking you to meet a standard I have yet to meet myself, and that it’s entirely fair to ask me to meet it as well.
If you would like to proceed with both of us agreeing to the standard of “provide specific quotations with all of the relevant context, and taboo floating summaries and opinions,” then I’ll engage. Else, I’m going to take the fact that you created a brand-new account with a deliberately contrarian title as a signal that I should not-reply and should deal with you only through the moderation team.
Thank you, Qiaochu_Yuan, for this much-needed clarification! It seems kinda important to address this sort of ambiguity well before you start casually talking about how ‘some views’ ought to be considered unacceptable for the sake of our community. (Thus, I think both habryka and Duncan have some good points in the debate about what sort of criticism should be allowed here, and what standards there should be for the ‘meta’ level of “criticizing critics” as wrongheaded, uncharitable, or whatever.)
casually talking about how ‘some views’ ought to be considered unacceptable for the sake of our community
I don’t understand what this is referring to. This discussion was always about epistemic norms, not object-level positions, although I agree that this could have been made clearer. From the OP:
I myself was wrong to engage with them as if their beliefs had cruxes that would respond to things like argument and evidence.
To be clear, I’m also unhappy with the way Duncan wrote the snark paragraph, and I personally would have either omitted it or been more specific about what I thought was bad.
I myself was wrong to engage with them as if their beliefs had cruxes that would respond to things like argument and evidence.
This is a fully-general-counterargument to any sort of involvement by people with even middling real-world concerns in LW2 - so if you mean to cite this remark approvingly as an example of how we should enforce our own standard of “perfectly rational” epistemic norms, I really have to oppose this. It is simply a fact about human psychology that “things like argument and evidence” are perhaps necessary but not sufficient to change people’s minds about issues of morality or politics that they actually care about, in a deep sense! This is the whole reason why Bernard Crick developed his own list of political virtues which I cited earlier in this very comment section. We should be very careful about this, and not let non-central examples on the object level skew our thinking about these matters.
I think a problem with this strategy is that the Chicken Littles don’t particularly like you or care about your opinion, and so the fact that you disapprove of their behavior has little to no deterrent effect.
It also risks a backfire effect. If one is in essence a troll happy to sneer at what rationalists do regardless of merit (e.g. “LOL, look at those losers trying to LARP Ender’s Game!”), seeing things like Duncan’s snarky parenthetical remarks would just spur them on, as it implies they’re successfully ‘getting a rise’ out of the target of their abuse.
It seems responses to criticism that is unpleasant or uncharitable are best addressed specifically to the offending remarks (if they’re on LW2, this seems like pointing out the fallacies/downvoting as appropriate), or just ignored. More broadcast admonishment (“I know this doesn’t apply to everyone, but there’s this minority who said stupid things about this”) seems unlikely to marshal a corps of people who will act together to defend conversational norms, and more likely to produce bickering and uncertainty about whether or not one is included in this ‘bad fraction’.
(For similar reasons, I think amplifying rebuttals along the lines of “You’re misinterpreting me, and people failing to interpret others correctly is one of the key problems with the LW community” seems apt to go poorly—few want to be painted as barbarians at the gates, and it prompts those otherwise inclined to admit their mistake to instead double down or argue the case further.)
The “common knowledge” aspect implies e.g. other people not engaging with them, though. (And other people not looking down on Duncan for not engaging with them, although this is hard to measure, but still makes sense as a goal.)
I mean, I suspect I *am* one of the Chicken Littles, and here you are, engaging with me. :)
I would make a bet at fairly generous odds that no rattumb person who offered a negative opinion of Dragon Army will face social consequences they consider significant from having a negative opinion of Dragon Army.
I would make a bet at fairly generous odds that no rattumb person who offered a negative opinion of Dragon Army will face social consequences they consider significant from having a negative opinion of Dragon Army.
My model of social consequences is that most of them are silent: someone who could have helped you doesn’t, someone asked for a recommendation about you gives a negative one, you aren’t informed of events because you don’t get invited to them. This makes it difficult to adjudicate such bets, as you’d have to have people coming forward with silent disapprovals, which would then have to be revealed to the person in question to determine their significance.
Hmm, this seems like a pretty important topic to discuss in more depth, but this week is also a uniquely bad period for me to spend time on this because I need to get everything ready for the move to LessWrong.com and also have some other urgent and time-consuming commitments. This doesn’t strike me as super urgent to resolve immediately, so I would leave this open for now and come back to it in a week when the big commitments I have are resolved, if that seems reasonable to you. I apologize for not being available as much as I would like for this.
I disagree—that sort of thing makes me feel like it’s less likely to be worth it to put in substantial effort to do good, legible criticism, whereas my younger self, more inclined to trolling (a habit I try to avoid, these days), smells blood in the water.
I don’t know if you would have considered my criticism to be good or bad (although there’s a substantial chance you never saw it), but good criticism is a lot of work and therefore probably much easier to chill.
FWIW, the feeling I got from those parts of your post wasn’t “this is Duncan clearly discouraging future behavior of this type, by indicating that it’s outside the Overton window” but rather something like “if you were the kind of a person who attacked Duncan’s post in a bad way before, then this kind of defiance is going to annoy that kind of a person even more, prompting even further attacks on Duncan in the future”.
(this comment has been edited to retract a part of it)
“prompting even further attacks” … there’s now more flexible moderation on LW2.0, so those posts can simply be erased, with a simple explanation as to why. I don’t think we should fear-to-defy those people, I think we should defy them and then win.
(this comment has been edited to remove parts that are irrelevant now that an honest misunderstanding between me and Kaj has been pinpointed. Virtue points accrue to Kaj for embodying the norm of edits and updates and, through example, prompting me to do the same.)
(this comment has been significantly edited in response to the above retraction)
That’s my primary motivation, yeah. I agree that there’s a (significant) risk that I’m going about things the wrong way, and I do appreciate people weighing in on that side, but at the moment I’m trusting my intuitions and looking for concrete, gears-y, model-based arguments to update, because I am wary of the confusion of incentives and biases in the mix. The below HPMOR quotes feel really relevant to this question, to me:
“One answer is that you shouldn’t ever use violence except to stop violence,” Harry said. “You shouldn’t risk anyone’s life except to save even more lives. It sounds good when you say it like that. Only the problem is that if a police officer sees a burglar robbing a house, the police officer should try to stop the burglar, even though the burglar might fight back and someone might get hurt or even killed. Even if the burglar is only trying to steal jewelry, which is just a thing. Because if nobody so much as inconveniences burglars, there will be more burglars, and more burglars. And even if they only ever stole things each time, it would—the fabric of society—” Harry stopped. His thoughts weren’t as ordered as they usually pretended to be, in this room. He should have been able to give some perfectly logical exposition in terms of game theory, should have at least been able to see it that way, but it was eluding him. Hawks and doves—“Don’t you see, if evil people are willing to risk violence to get what they want, and good people always back down because violence is too terrible to risk, it’s—it’s not a good society to live in, Headmaster! Don’t you realize what all this bullying is doing to Hogwarts, to Slytherin House most of all?”
...
“There was a Muggle once named Mohandas Gandhi,” Harry said to the floor. “He thought the government of Muggle Britain shouldn’t rule over his country. And he refused to fight. He convinced his whole country not to fight. Instead he told his people to walk up to the British soldiers and let themselves be struck down, without resisting, and when Britain couldn’t stand doing that any more, we freed his country. I thought it was a very beautiful thing, when I read about it, I thought it was something higher than all the wars that anyone had ever fought with guns or swords. That they’d really done that, and that it had actually worked.” Harry drew another breath. “Only then I found out that Gandhi told his people, during World War II, that if the Nazis invaded they should use nonviolent resistance against them, too. But the Nazis would’ve just shot everyone in sight. And maybe Winston Churchill always felt that there should’ve been a better way, some clever way to win without having to hurt anyone; but he never found it, and so he had to fight.” Harry looked up at the Headmaster, who was staring at him. “Winston Churchill was the one who tried to convince the British government not to give Czechoslovakia to Hitler in exchange for a peace treaty, that they should fight right away—”
“I recognize the name, Harry,” said Dumbledore. The old wizard’s lips twitched upward. “Although honesty compels me to say that dear Winston was never one for pangs of conscience, even after a dozen shots of Firewhiskey.”
“The point is,” Harry said, after a brief pause to remember exactly who he was talking to, and fight down the suddenly returning sense that he was an ignorant child gone insane with audacity who had no right to be in this room and no right to question Albus Dumbledore about anything, “the point is, saying violence is evil isn’t an answer. It doesn’t say when to fight and when not to fight. It’s a hard question and Gandhi refused to deal with it, and that’s why I lost some of my respect for him.”
My primary disagreement with the moderation of LW2.0 has consistently been “at what threshold is ‘violence’ in this metaphorical sense correct? When is too soon, versus too late?”
I’m sorry. I re-read what you’d actually written and you’re right.
I’d been reading some discussion about something else slightly earlier, and then the temporal proximity of that-other-thing caused my impression of that-other-thing and my impression of what-Duncan-wrote to get mixed up. And then I didn’t re-read what you’d written before writing my comment, because I was intending to just briefly report on my initial impression rather than get into any detailed discussion, and I didn’t want my report of my initial impression to get contaminated by a re-read—not realizing that it had been contaminated already.
I have tremendous respect for the fact that you’re the type of person who could make a comment like the one above. That you a) sought out the source of confusion and conflict, b) took direct action to address it, and c) let me and others know what was going on.
I feel like saying something like “you get a million points,” but it’s more like, you already earned the points, out there in the territory, and me saying so is just writing it down on the map. I’ve edited my own comments as well, to remove the parts that are no longer relevant.
“Wolves, dogs, even chickens, fight for dominance among themselves. What I finally understood, from that clerk’s mind, was that to him Lucius Malfoy had dominance, Lord Voldemort had dominance, and David Monroe and Albus Dumbledore did not. By taking the side of good, by professing to abide in the light, they had made themselves unthreatening. In Britain, Lucius Malfoy has dominance, for he can call in your loans, or send Ministry bureaucrats against your shop, or crucify you in the Daily Prophet, if you go openly against his will. And the most powerful wizard in the world has no dominance, because everyone knows that he is a hero out of stories, relentlessly self-effacing and too humble for vengeance … In Hogwarts, Dumbledore does punish certain transgressions against his will, so he is feared to some degree, though the students still make free to mock him in more than whispers. Outside this castle, Dumbledore is sneered at; they began to call him mad, and he aped the part like a fool. Step into the role of a savior out of plays, and people see you as a slave to whose services they are entitled and whom it is their enjoyment to criticize; for it is the privilege of masters to sit back and call forth helpful corrections while the slaves labor … I understood that day in the Ministry that by envying Dumbledore, I had shown myself as deluded as Dumbledore himself. I understood that I had been trying for the wrong place all along. You should know this to be true, boy, for you have made freer to speak ill of Dumbledore than you ever dared speak ill of me. Even in your own thoughts, I wager, for instinct runs deep. You knew that it might be to your cost to mock the strong and vengeful Professor Quirrell, but that there was no cost in disrespecting the weak and harmless Dumbledore.”
… in at least some ways, it’s important to have Quirrells and Lucius Malfoys around on the side of LW’s culture, and not just David Monroes and Dumbledores.
… in at least some ways, it’s important to have Quirrells and Lucius Malfoys around on the side of LW’s culture, and not just David Monroes and Dumbledores.
This is an interesting point—and, ISTM, a reason not to be too demanding about people coming to LW itself with a purely “good faith” attitude! To some extent, “bad faith” and even fights for dominance just come with the territory of Hobbesian social and political struggle—and if you care about “hav[ing] Quirrells and Lucius Malfoys” on our side, you’re clearly making a point about politics as well, at least in the very broadest sense.
We totally have private messaging now! Please err on the side of using it for norm disputes so that participants don’t feel the eyes of the world on them so much.
(Not that norm disputes shouldn’t be public—absolutely we should have public discussion of norms—just it can be great to get past tense parts of it sometimes.)
I think it’s important to have public, common-knowledge deterrence of that sort of behavior. I think that part of what allowed it to flourish on LessWrong 1.0 is the absence of comments like my parenthetical, making it clear that that sort of thing is outside the Overton window.
There is an important distinction to be made here between criticizing an online project like LessWrong itself (or LessWrong 2), where the natural focus is on loosely coordinating useful work to be performed “IRL” (the ‘think globally, act locally’ strategy), and ‘criticizing’ a real-world, physical community, where people are naturally defending against shared threats of bodily harm and striving to foster a nurturing ‘ecology’ or environment. To put it as pithily as possible: the somewhat uncomfortable reality is that, psychologically, a real-world, physical community is *always* a “safe space”, whether or not it is explicitly framed as such, and whether or not its members intend it as such. And yes, this “safe space” characterization comes with all the usual ‘political’ implications about the acceptability of criticism; those implications are actually a lot more cogent here than in your average social club on Tumblr or wherever. I do apologize for resorting to contentious “political” or even “tribal” language, which seems to be frowned upon by the new moderation guidelines, but no “guidelines” or rules of politeness can help us escape the obvious fact that doing something physically, in the real world, always comes with very real political consequences, which need to be addressed with an attitude that is properly mindful of basic values such as compromise and adaptability, no matter what the context.
By the time I got to the end of this, I realized I wasn’t quite sure what you were trying to say.
Given that, I’m sort of shooting-in-the-dark, and may not actually be responding to your point …
1) I don’t think the difference between “talking about internet stuff” and “talking about stuff that’s happening IRL” has any meaningful relevance when it comes to standards of discourse. I imagined you feeling sympathy for people’s panic and tribalism and irrationality because they were looking at a real-life project with commensurately higher stakes; I don’t feel such sympathy myself. I don’t want to carve out an exception that says “intellectual dishonesty and immature discourse are okay if it’s a situation where you really care about something important, tho.”
2) I’m uncertain what you’re pointing at with your references to political dynamics, except possibly the thing where people feel pressure to object or defend not only because of their own beliefs, but also because of second-order effects (wanting to be seen to object or defend, wanting to embolden other objectors or defenders, not wanting to be held accountable for failing to object or defend and thereby become vulnerable to accusations of tacit support).
I reiterate that there was a lot of excellent, productive, and useful discourse from people who were unambiguously opposed to the idea. There were people who raised cogent objections politely, with models to back those objections up and concrete suggestions for next actions to ameliorate them.
Then there were the rationalists-in-name-only (I can think of a few specific ones in the Charter thread, and a few on Tumblr whose rants were forwarded to me) whose whole way of approaching intellectual disagreement is fundamentally wrong and corrosive and should be consistently, firmly, and unambiguously rejected. It’s like the thing where people say “we shouldn’t be so tolerant that we endorse wildly toxic and intolerant ranting that itself threatens the norm of tolerance,” only it’s even worse in this case because what’s at stake is whether our culture trends toward truthseeking at all.
I don’t think the difference between “talking about internet stuff” and “talking about stuff that’s happening IRL” has any meaningful relevance when it comes to standards of discourse.
Well, human psychology says that “stuff that’s happening IRL” kinda has to play by its own rules. Online social clubs simply aren’t treated the same by common sense ‘etiquette’ (much less common-sense morality!) as actual communities where people naturally have far higher stakes.
I don’t want to carve out an exception that says “intellectual dishonesty and immature discourse are okay if it’s a situation where you really care about something important,
If you think I’m advocating for willful dishonesty and immaturity, then you completely missed the point of what I was saying. Perhaps you are among those who intuitively associate “politics” or even “tribalism” with such vices (ignoring the obvious fact that a ‘group house’ is itself literally, inherently tribal—as in, defining a human “tribe”!). You may want to reference e.g. Bernard Crick’s short work In Defense of Politics (often assigned as required reading in intro poli-sci courses) for a very different POV indeed on what “political” even means. Far beyond the usual ‘virtues of rationality’, other virtues such as adaptability, compromise, creativity, and even humor are inherently political.
The flip side of this, though, is that people will often disagree about what counts as intellectually dishonest or immature in the first place! Part of a productive attitude to contentious debate is an ability and inclination to look beyond these shallow attributions, toward a more charitable view of even “very bad” arguments. Truth-seeking is fine and should always be a basic value, but it simply can’t be an all-encompassing goal when we’re dealing with real-world communities and all their attendant issues.
I still don’t follow what you’re actually advocating, though, or what specific thing you’re criticizing. Would you mind explaining to me like I’m five? Or, like, boiling it down into the kinds of short, concrete statements from which one could construct a symbolic logic argument?
I skimmed some of Crick and read some commentary on him, and Crick seems to take the Hobbesian “politics as a necessary compromise” viewpoint. (I wasn’t convinced by his definition of the word politics, which seemed not to point at what I would point at as politics.)
My best guess: I think they’re arguing not that immature discourse is okay, but that we need to be more polite toward people’s views in general for political reasons, as long as those people are acting somewhat in good faith (I suspect they think that you’re not being sufficiently polite toward those you’re trying to throw out of the Overton window). As a result, we need to engage less in harsh criticism when it might be seen as threatening.
That being said, I also suspect that Duncan would agree that we need to be charitable. I suspect the actual disagreement is whether the behavior of the critics Duncan is replying to is actually the sort of behavior we want/need to accept in our community.
(Personally, I think we need to be more willing to do real-life experiments, even if they risk going somewhat wrong. And I think some of the tumblr criticism definitely fell outside what I would want in the Overton window. So I’m okay with Duncan’s parenthetical, though it would have been nicer if it had been more explicit about who it was responding to.)
I suspect they think that you’re not being sufficiently polite toward those you’re trying to throw out of the overton window
Actually, what I would say here is that “politeness” itself entails that we should seek a clear understanding of what attitudes we’re throwing out of the Overton window, and why, or out of what sort of specific concerns. (“Politeness” is actually a pretty misleading term, since we’re dealing with fairly important issues of morality and ethics, not just shallow etiquette—but whatever, let’s go with it.) There’s nothing wrong whatsoever with considering “harsh criticism [that] might be seen as threatening” to be outside the Overton window. But whereas this makes a lot of sense when dealing with real-world efforts like the Dragon Army group, or the various “rationalist Baugruppen” that seem to be springing up in some places, it feels quite silly to let the same attitude infect our response to “criticism” of Less Wrong as an online site, or of LessWrong 2 for that matter, or even of the “rationalist” community considered not as an actual community physically manifested in some place, but as a general shared mindset.
When we say that “the behavior of the critics Duncan is replying to are [not] the sort of behavior we want/need to accept in our community”, what do we actually mean by “behavior” and “community” here? Are we pointing at the real-world concerns inherent in “criticizing” an effort like Dragon Army in a harsh, impolite, and perhaps even threatening way (if perhaps only in a political sense, such as by ‘threatening’ a loss of valued real-world allies)? Or are we using these terms in a metaphorical sense that could encompass everything we might “do” on the Internet as folks with a rationalist mindset? I see the very fact that it’s not really “explicit who (or what) [we’re] responding to” as a problem that needs to be addressed in some way, at least wrt. its broadest plausible implications—even though I definitely understand the political benefits of understating such things!
Ah, I think I now remember some of that stuff, though I think I only skimmed a small subset of it. What I remember did seem quite bad. I did not think of those when writing the above comment (though I don’t think it would have changed the general direction of my concerns, it does change the magnitude of my worries).
I think this falls under the category ‘risks one must be willing to take, and which are much less bad than they look.’ If it turns out he was thinking of you, him saying this in writing doesn’t make things any worse. Damage, such as it is, is already done.
Yes, there’s the possible downside of ‘only the people he’s not actually talking about feel bad about him saying it’ but a little common knowledge-enabled smackdown seems necessary and helpful, and Duncan has a right to be upset and frustrated by the reception.
If it turns out he was thinking of you, him saying this in writing doesn’t make things any worse. Damage, such as it is, is already done.
This seems to imply that if person A thinks bad things of person B and says this out loud, then the only effect is that person B becomes aware of person A thinking bad things of them. But that only works if it’s a private conversation: person C finding out about this may make them also like B less, or make them like A less, or any number of other consequences.
My presumption was that person A thinks bad things about a class of people D, which B may or may not belong to, and which B is worried they belong to; but when others think of D they don’t think of B, so C’s opinion of B seems unlikely to change. If people assume B is in D, then that would be different (although likely still far less bad than it would feel like it was).
There seems to be a common thing where statements about a class of people D, will associate person B with class D by re-centering the category of D towards including B, even if it’s obvious that the original statement doesn’t refer to B. This seems like the kind of a case where that effect could plausibly apply (here, in case it’s not clear, B is a reasonable critic and D is the class of unreasonable critics).
Great post, really like the structure and really really appreciate the transparency.
However, the sections about critics did leave me with a bad taste in my mouth. I am pretty sure you aren’t including me in the set of uncharitable/wrongheaded critics, but I am not fully confident, and I think this is kind of the problem. I imagine many of the people who had valid concerns and good criticism feel like they are being dismissed, or at least assign a significant probability to it, and that this will have a significant chilling effect on good future criticism, which seems bad for everyone.
I think you definitely faced a lot of unfair criticism and verbal abuse, in a form that we would pretty heavily punish on the new LessWrong (man, that numbers guy), but I think it’s better to try to just ignore those critics, or be very careful to not accidentally use broad enough brushstrokes to also paint many of the good critics in the same light. I know you tried to avoid this by putting in explicit disclaimers, but it’s very hard from the inside to assess whether you think my criticism was constructed in good faith, and even on reflection I am not actually sure whether you think it was, and I expect others feel the same.
I want to emphasize again though that I am very happy you wrote this, and that overall this post has been quite valuable for me to read, and I am interested in chatting with you and others more about future iterations of Dragon Army and how to make it work. Keep up the good work.
Re: criticizing critics …
I think it’s important to have public, common-knowledge deterrence of that sort of behavior. I think that part of what allowed it to flourish on LessWrong 1.0 is the absence of comments like my parenthetical, making it clear that that sort of thing is outside the Overton window. I claim that there were not enough defenders in the garden, and I think that a lot of LW is too unwilling to outgroup and disincentivize behavior that no one wants to see, because of something like not wanting to seem illiberal or close-minded or unwilling-to-rationally-consider-the-possibility-that-they’re-wrong.
I recognize that this is a place where we disagree, and where indeed you could easily turn out to be more correct than me. But that parenthetical was carefully and deliberately constructed over the course of weeks, with me spending more than two full hours on its wording once you add up all the musing and tinkering and consideration of the consequences. It was Very Much On Purpose, and very much intended to have cultural effects.
I predict that the chilling effects on good criticism will be smaller-enough than the chilling effects on bad criticism that it’s net worthwhile, in the end. In particular, I think the fact that you had difficulty telling whether I was referring to you or not is a feature, not a bug—as a result, you booted up your metacognition and judging/evaluatory algorithms, and users doing that as a matter of habit is one of my hopes/cruxes for LessWrong. We don’t need crippling anxiety, but if I could push a button to make LW commenters 3% more self-conscious or 3% less self-conscious (3% more reflective and deliberate and self-evaluatory and afraid-of-their-own-biases-leaking-through or 3% less so) and it had to be across-the-board as opposed to distinguishing between different subsets of people … I know which I’d choose.
(I imagine you don’t want a long back-and-forth about this here; I would be willing to contribute to a discussion on this in Meta if you want.)
The weatherman who predicts a 20% chance of rain on a sunny day isn’t necessarily wrong. Even the weatherman who predicts 80% chance of rain on a sunny day isn’t *necessarily* wrong.
If there’s a norm of shaming critics who predict very bad outcomes, of the sort “20% chance this leads to disaster”, then after shaming them the first four times their prediction fails to come true, they’re not going to mention it the fifth time, and then nobody will be ready for the disaster.
I don’t know exactly how to square this with the genuine beneficial effects of making people have skin in the game for their predictions, except maybe for everyone to be more formal about it and have institutions that manage this sort of thing in an iterated way using good math. That’s why I’m glad you were willing to bet me about this, though I don’t know how to solve the general case.
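To sketch what “good math” could look like here: a proper scoring rule such as the Brier score rewards or punishes forecasters only in the aggregate, over many predictions, rather than on any single outcome. A minimal illustration (the 20% base rate, the 1000-day simulation, and the two forecaster behaviors are assumptions invented for the example, not anything from the thread):

```python
import random

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; it can only be judged over many predictions."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

random.seed(0)
# Simulate 1000 days on which rain truly happens 20% of the time.
outcomes = [1 if random.random() < 0.2 else 0 for _ in range(1000)]

honest = [0.2] * len(outcomes)  # the "20% chance of rain" forecaster
silent = [0.0] * len(outcomes)  # the forecaster shamed into never warning

# On any single sunny day the honest forecaster looks "wrong-er" than the
# silent one, but over the long run the honest forecaster scores better.
print(brier_score(honest, outcomes))  # ~0.16 in expectation
print(brier_score(silent, outcomes))  # ~0.20 in expectation
```

The point of the sketch is that shaming a forecaster for a single failed 20% prediction optimizes for the `silent` strategy, which scores worse over the iterated game.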
I think it is hugely important to point out that this is not the norm Duncan is operating under or proposing. I understand Duncan as saying “hey, remember those people who were nasty and uncharitable and disgusted by me and my plans? Their predictions failed to come true.”
Like, quoting from you during the original discussion of the charter:
Of course, here, “the comments against it” isn’t ‘anyone who speaks about against the idea.’ One can take jbeshir’s comment as an example of someone pointing directly at the possibility of catastrophic abuse while maintaining good discourse and epistemic norms.
---
I note that I am generally not a fan of vaguebooking / making interventions on the abstract level instead of the object level, and if I were going to write a paragraph like the one in the OP I would have named names instead of making my claims high-context.
Agreed that some people were awful, but I still think this problem applies.
If somebody says “There’s an 80% chance of rain today, you idiot, and everyone who thinks otherwise deserves to die”, then it’s still not clear that a sunny day has proven them wrong. Or rather, they were always wrong to be a jerk, but a single run of the experiment doesn’t do much to prove they were wronger than we already believed.
To be clear, I agree with this. Furthermore, while I don’t remember people giving probability distributions, I think it’s fair to guess that critics as a whole (and likely even the irrational critics) put higher probability on the coarse description of what actually happened than Duncan or those of us who tried the experiment, and that makes an “I told you so!” about assigning lower probability to something that didn’t happen hollow.
I agree with this. Perhaps a better expression of the thing (if I had felt like it was the right spot in the piece to spend this many words) would’ve been:
I suspect that coming out of the gate with that many words would’ve pattern-matched to whining, though, and that my specific parenthetical was still stronger once you take into account social reality.
I’m curious if you a) agree or disagree or something-else with the quote above, and b) agree or disagree or something-else with my prediction that the above would’ve garnered a worse response.
The problem is absolutely not that people were predicting very bad outcomes. People on Tumblr were doing things like (I’m working from memory here) openly speculating about how incredibly evil and sick and twisted Duncan must be to even want to do anything like this, up to something like (again, working from memory here) talking about conspiring to take Duncan down somehow to prevent him from starting Dragon Army.
As someone who didn’t follow the original discussions either on Tumblr or LW, this was totally unclear from Duncan’s parenthetical remark in the OP. So I think for the purpose of “common-knowledge deterrence of that sort of behavior” that section totally failed, since lots of people must have, like Scott and me, gotten wrong ideas about what kind of behavior Duncan wanted to deter.
A part of my model here is that it’s impossible from a social perspective for me to point these things out explicitly.
I can’t describe the dynamic directly (my thinking contains some confusion) so I’ll point out an analogous thing.
Alex and Bradley have had a breakup.
Alex is more destabilized than Bradley, by the breakup—to the point that Alex finds it impossible to occupy the same space as Bradley. This is not a claim about Bradley being bad or in the wrong or responsible (nor the opposite); it’s just a brute fact about Alex’s emotional state.
There’s an event with open borders, or with broad-spectrum invites.
If Bradley goes, Alex cannot go. The same is not true in reverse; Bradley is comfortable shrugging and just handling it.
Alex absolutely cannot be the person to raise the question “Hey, maybe we have to do something about this situation, vis-a-vis inviting Alex or Bradley.”
If Alex says that, this is inevitably interpreted as Alex doing something like taking hostages, or trying to divide up the universe and force people to take sides, or being emotionally immature and unreasonable. This is especially true because Bradley’s right there, providing a contrasting example of “it’s totally fine for us to coexist in the same room.” Alex will look like The Source Of The Problem.
However, it’s completely fine if someone else (Cameron) says “Hey, look—I think Alex needs more space, and we as a social group should figure out some way to create space for the processing and healing to happen. Maybe we invite Bradley to this one, but tell both Bradley and Alex that we’ll invite Alex-and-only-Alex to the next one?”
Like, the social fabric is probably intelligent enough to handle the division without assigning fault or blame. But that requires third-party action. It can’t come from Alex; it can maybe barely come from Bradley, if Bradley is particularly mature and savvy (but if Bradley doesn’t feel like it, Bradley can just not).
In a similar sense, I tried real hard to point out the transgressions being made in the LW thread and on Tumblr, and this ended up backfiring on me, even though I claim with high confidence that if the objections had been raised by someone else, most LWers would’ve agreed with them.
So in this post, I drew the strongest possible line-in-the-sand that I could, and then primarily have sat back, rather than naming names or pulling quotes or trying to get specific. People hoping that I would get specific are (I claim) naively mispredicting what the results would have been, had I done so.
In this case, I owe the largest debts of gratitude to Qiaochu and Vaniver, for being the Cameron to my Alex-Bradley situation. They are saying things that it is unpossible for me to say, because of the way humans tend to pattern-match in such situations.
This concept feels to me like it deserves a top level post.
This is indeed an important dynamic to discuss, so I’m glad you brought it up, but I think your judgment of the correct way to handle it is entirely wrong, and quite detrimental to the health of social groups and communities.
You say:
But Alex not only “will look like”, but in fact is, the source of the problem. In fact, the entirety of the problem is Alex’s emotional issues (and any consequences thereof, such as social discomfort inflicted upon third parties, conflicts that are generated due to Alex’s presence or behavior, etc.). There is no problem beyond or separately from that.
This is only “fine” to the extent that the social group as a whole understands, and endorses, the fact that this “solution” constitutes taking Alex’s side.
Now, it is entirely possible that the social group does indeed understand and endorse this—that they are consciously taking Alex’s side. Maybe Alex is a good friend of many others in the group; whereas Bradley, while they like him well enough, is someone they only know through Alex—and thus they owe him a substantially lesser degree of loyalty than they do Alex. Such situations are common enough, and there is nothing inherently wrong with taking one person’s side over the other in such a case.
What is wrong is taking one person’s side, while pretending that you’re being impartial.
A truly impartial solution would look entirely different. It would look like this:
“Alex, Bradley, both of you are free to come, or not come, as you like. If one or both of you have emotional issues, or conflicts, or anything like that—work them out yourselves. Our [i.e., the group’s] relationship is with both of you separately and individually; we will thus continue to treat both of you equally and fairly, just as we treat every other one of us.”
As for Cameron… were I Bradley, I would interpret his comment as covert side-taking. (Once again: it may be justified, and not dishonorable at all. But it is absolutely not neutral.)
I think the view I’d take is somewhere in between this view and the view that Duncan described.
If I’m sending out invites to a small dinner party, I’d just alternate between inviting Alex and Bradley.
However, if it’s an open invite thing, it seems like the official policy should be that Alex and Bradley are both invited (assuming all parties are in good standing with the group in general), but if I happen to be close to Bradley I might privately suggest that they skip out on some events so that Alex can go, because that seems like the decent thing to do. (And if Bradley does skip, I would probably consider that closer to supererogatory rather than mandatory and award them social points for doing so.)
Similarly, if I’m close to Alex, I might nudge them towards doing whatever processing is necessary to allow them to coexist with Bradley, so that Bradley doesn’t have to skip.
So I’m agreeing with you that official policy for open-invite things should be that both are invited. But I think I’m disagreeing about whether it’s ever reasonable to expect Bradley to skip some events for the sake of Alex.
I think your data set is impoverished. I think you could, in the space of five-minutes-by-the-clock, easily come up with multiple situations in which Alex is not at all the source of the problem, but rather Bradley, and I think you can also easily come up with multiple situations in which Alex and Bradley are equally to blame. In your response, you have focused only on cases in which it’s Alex’s fault, as if they represent the totality of possibility, which seems sloppy or dishonest or knee-jerk or something (a little). Your “truly impartial” solution is quite appropriate in the cases where fault is roughly equally shared, but miscalibrated in the cases where the problem originates with Bradley. Indeed, it can result in tacit social endorsement of abuse in rare-but-not-extremely-rare sorts of situations.
Neither ‘blame’ nor ‘fault’ are anywhere in my comment.
And that’s the point: your perspective requires the group to assign blame, to adjudicate fault, to take sides. Mine does not. In my solution, the group treats what has transpired as something that’s between Alex and Bradley. The group takes no position on it. Alex now proclaims an inability to be in the same room as Bradley? Well, that’s unfortunate for Alex, but why should that affect the group’s relationship with Bradley? Alex has this problem; Alex will have to deal with it.
To treat the matter in any other way is to take sides.
You say:
How can this be? By (your own) construction, Bradley is fine with things proceeding just as they always have, w.r.t. the group’s activities. Bradley makes no impositions; Bradley asks for no concessions; Bradley in fact neither does nor says anything unusual or unprecedented. If Alex were to act exactly as Bradley is acting, then the group might never even know that anything untoward had happened.
Once again: it may be right and proper for a group to take one person’s side in a conflict. (Such as in your ‘abuse’ example.) But it is dishonest, dishonorable, and ultimately corrosive to the social fabric, to take sides while pretending to be impartial.
I think it’s intellectually dishonest to write:
… and then say “Neither ‘blame’ nor ‘fault’ are anywhere in my comment.” I smell a motte-and-bailey in that. There’s obviously a difference between blame games and fault analysis (in the former, one assigns moral weight and docks karma from a person’s holistic score; in the latter, one simply says “X caused Y”). But even in the dispassionate fault analysis sense, it strikes me as naive to claim that Alex’s reaction is—in ALL cases that don’t involve overt abuse—entirely a property of Alex and is entirely Alex’s responsibility.
I think you’re misunderstanding what I’m saying.
You seem to think that I’m claiming something like “it’s Alex’s fault that Alex feels this way”. But I’m claiming no such thing. In fact, basically the entirety of my point is that (in the “impartiality” scenario), as far as the group is concerned, it’s simply irrelevant why Alex feels this way. We can even go further and say: it’s irrelevant what Alex does or does not feel. Alex’s feelings are Alex’s business. The group is not interested in evaluating Alex’s feelings, in judging whether they are reasonable or unreasonable, in determine whether Alex is at fault for them or someone else is, etc. etc.
What I am saying is that Alex—specifically, Alex’s behavior (regardless of what feelings are or are not the cause of that behavior)—manifestly is the source of the problem for the group; that problem being, of course, “we now have to deal with one of our members refusing to be in the same room with another one of our members”.
As soon as you start asking why Alex feels this way, and whose fault it is that Alex feels this way, and whether it is reasonable for Alex to feel this way, etc., etc., you are committing yourself to some sort of side-taking. Here is what neutrality would look like:
Alex, to Group [i.e. spokesmember(s) thereof]: I can no longer stand to be in the same room as Bradley! Any event he’s invited to, I will not attend.
Group: Sounds like a bummer, man. Bradley’s invited to all public events, as you know (same as everyone else).
Alex: I have good reasons for feeling this way!
Group: Hey, that’s your own business. It’s not our place to evaluate your feelings, or judge whether they’re reasonable or not. Whether you come to things or not is, as always, your choice. You can attend, or not attend, for whatever reasons you like, or for no particular reason at all. You’re a free-willed adult—do what you think is best; you don’t owe us any explanations.
Alex: But it’s because…
Group (interrupting): No, really. It’s none of our business.
Alex: But if I have a really good reason for feeling this way, you’ll side with me, and stop inviting Bradley to things… right??
Group: Wrong.
Alex: Oh.
Separately:
Responsibility is one thing, but Alex’s reaction is obviously entirely a property of Alex. I am perplexed by the suggestion that it can be otherwise.
Yeah but you can’t derive fault from property, because by your own admission your model makes no claim of fault. At most you can say that Alex is the immediate causal source of the problem.
Who ever claimed otherwise?
Ah, but who will argue for the "Alexes" who were genuinely made uncomfortable by the proposed norms of Dragon Army—perhaps to the point of disregarding even some good arguments and/or evidence in favor of it—and who are now being conflated with horribly abusive people as a direct result of this LW2 post? Social discomfort can be a two-way street.
I disagree with your summary and frame, and so cannot really respond to your question.
So this parenthetical-within-the-parenthetical didn’t help, huh?
I guess one might not have had a clear picture of what Duncan was counting as constructive criticism.
There were people like that, but there were also people who talked about the risks without sending ally-type signals of "but this is worth trying" or "on balance this is a good idea", whom Duncan would then accuse of "bad faith" and "strawmanning", and would lump in with the people you're thinking of.
I request specific quotations rather than your personal summary. I acknowledge that I have not been providing specific quotations myself, and have been providing my summary; I acknowledge that I’m asking you to meet a standard I have yet to meet myself, and that it’s entirely fair to ask me to meet it as well.
If you would like to proceed with both of us agreeing to the standard of “provide specific quotations with all of the relevant context, and taboo floating summaries and opinions,” then I’ll engage. Else, I’m going to take the fact that you created a brand-new account with a deliberately contrarian title as a signal that I should not-reply and should deal with you only through the moderation team.
Thank you, Qiaochu_Yuan, for this much-needed clarification! It seems kinda important to address this sort of ambiguity well before you start casually talking about how 'some views' ought to be considered unacceptable for the sake of our community. (Thus, I think both habryka and Duncan have some good points in the debate about what sort of criticism should be allowed here, and what standards there should be for the 'meta' level of "criticizing critics" as wrongheaded, uncharitable, or whatever.)
I don’t understand what this is referring to. This discussion was always about epistemic norms, not object-level positions, although I agree that this could have been made clearer. From the OP:
To be clear, I’m also unhappy with the way Duncan wrote the snark paragraph, and I personally would have either omitted it or been more specific about what I thought was bad.
This is a fully general counterargument to any sort of involvement in LW2 by people with even middling real-world concerns, so if you mean to cite this remark approvingly as an example of how we should enforce our own standard of "perfectly rational" epistemic norms, I really have to oppose it. It is simply a fact about human psychology that "things like argument and evidence" are perhaps necessary but not sufficient to change people's minds about issues of morality or politics that they actually care about, in a deep sense! This is the whole reason why Bernard Crick developed his own list of political virtues, which I cited earlier in this very comment section. We should be very careful about this, and not let non-central examples on the object level skew our thinking about these matters.
I think a problem with this strategy is that the Chicken Littles don’t particularly like you or care about your opinion, and so the fact that you disapprove of their behavior has little to no deterrent effect.
+1
It also risks a backfire effect. If I were in essence a troll, happy to sneer at whatever rationalists do regardless of merit (e.g. "LOL, look at those losers trying to LARP Ender's Game!"), seeing things like Duncan's snarky parenthetical remarks would just spur me on, as it implies I'm successfully 'getting a rise' out of the target of my abuse.
It seems responses to criticism that is unpleasant or uncharitable are best addressed specifically to the offending remarks (if they're on LW2, this means pointing out the fallacies/downvoting as appropriate), or just ignored. More broadcast admonishment ("I know this doesn't apply to everyone, but there's this minority who said stupid things about this") seems unlikely to marshal a corps of people who will act together to defend conversational norms, and more likely to produce bickering and uncertainty about whether or not one is included in this 'bad fraction'.
(For similar reasons, I think amplifying rebuttals along the lines of, "You're misinterpreting me, and people failing to interpret others correctly is one of the key problems with the LW community" seems apt to go poorly—few want to be painted as barbarians at the gates, and it prompts those otherwise inclined to admit their mistake to instead double down or argue the case further.)
The “common knowledge” aspect implies e.g. other people not engaging with them, though. (And other people not looking down on Duncan for not engaging with them, although this is hard to measure, but still makes sense as a goal.)
I mean, I suspect I *am* one of the Chicken Littles, and here you are, engaging with me. :)
I would make a bet, at fairly generous odds, that no rattumb person who offered a negative opinion of Dragon Army will face social consequences they consider significant from having done so.
My model of social consequences is that most of them are silent: someone who could have helped you doesn't, someone asked for a recommendation about you gives a negative one, you aren't informed of events because you don't get invited to them. This makes it difficult to adjudicate such bets, as you'd have to have people coming forward with silent disapprovals, which would then have to be revealed to the person in question to determine their significance.
Hmm, this seems like a pretty important topic to discuss in more depth, but this week is also a uniquely bad period for me to spend time on this because I need to get everything ready for the move to LessWrong.com and also have some other urgent and time-consuming commitments. This doesn’t strike me as super urgent to resolve immediately, so I would leave this open for now and come back to it in a week when the big commitments I have are resolved, if that seems reasonable to you. I apologize for not being available as much as I would like for this.
No worries!
I disagree—that sort of thing makes me feel like it's less likely to be worth it to put in substantial effort to do good, legible criticism, whereas my younger self, more inclined to trolling (a habit I try to avoid these days), smells blood in the water.
I don’t know if you would have considered my criticism to be good or bad (although there’s a substantial chance you never saw it), but good criticism is a lot of work and therefore probably much easier to chill.
FWIW, the feeling I got from those parts of your post wasn't "this is Duncan clearly discouraging future behavior of this type, by indicating that it's outside the Overton window" but rather something like "if you were the kind of a person who attacked Duncan's post in a bad way before, then this kind of defiance is going to annoy that kind of a person even more, prompting even further attacks on Duncan in the future".
(this comment has been edited to retract a part of it)
“prompting even further attacks” … there’s now more flexible moderation on LW2.0, so those posts can simply be erased, with a simple explanation as to why. I don’t think we should fear-to-defy those people, I think we should defy them and then win.
(this comment has been edited to remove parts that are irrelevant now that an honest misunderstanding between me and Kaj has been pinpointed. Virtue points accrue to Kaj for embodying the norm of edits and updates and, through example, prompting me to do the same.)
That may be, but didn’t you say that your motivation was to get people to engage in less such behavior in the future, rather than causing more of it?
[EDIT: the second half of this comment has been retracted]
(this comment has been significantly edited in response to the above retraction)
That’s my primary motivation, yeah. I agree that there’s a (significant) risk that I’m going about things the wrong way, and I do appreciate people weighing in on that side, but at the moment I’m trusting my intuitions and looking for concrete, gears-y, model-based arguments to update, because I am wary of the confusion of incentives and biases in the mix. The below HPMOR quotes feel really relevant to this question, to me:
...
My primary disagreement with the moderation of LW2.0 has consistently been “at what threshold is ‘violence’ in this metaphorical sense correct? When is too soon, versus too late?”
I’m sorry. I re-read what you’d actually written and you’re right.
I’d been reading some discussion about something else slightly earlier, and then the temporal proximity of that-other-thing caused my impression of that-other-thing and my impression of what-Duncan-wrote to get mixed up. And then I didn’t re-read what you’d written before writing my comment, because I was intending to just briefly report on my initial impression rather than get into any detailed discussion, and I didn’t want my report of my initial impression to get contaminated by a re-read—not realizing that it had been contaminated already.
Will edit my previous comments.
I have tremendous respect for the fact that you’re the type of person who could make a comment like the one above. That you a) sought out the source of confusion and conflict, b) took direct action to address it, and c) let me and others know what was going on.
I feel like saying something like “you get a million points,” but it’s more like, you already earned the points, out there in the territory, and me saying so is just writing it down on the map. I’ve edited my own comments as well, to remove the parts that are no longer relevant.
One other HPMOR quote that feels relevant:
… in at least some ways, it’s important to have Quirrells and Lucius Malfoys around on the side of LW’s culture, and not just David Monroes and Dumbledores.
This is an interesting point—and, ISTM, a reason not to be too demanding about people coming to LW itself with a purely “good faith” attitude! To some extent, “bad faith” and even fights for dominance just come with the territory of Hobbesian social and political struggle—and if you care about “hav[ing] Quirrells and Lucius Malfoys” on our side, you’re clearly making a point about politics as well, at least in the very broadest sense.
We totally have private messaging now! Please err on the side of using it for norm disputes so that participants don’t feel the eyes of the world on them so much.
(Not that norm disputes shouldn’t be public—absolutely we should have public discussion of norms—just it can be great to get past tense parts of it sometimes.)
There is a very important distinction to be made here, between criticism of an online project like LessWrong itself or even LessWrong 2, where the natural focus is on loosely coordinating useful work to be performed "IRL" (the 'think globally, act locally' strategy), and people 'criticizing' a real-world, physical community where people are naturally defending against shared threats of bodily harm, and striving to foster a nurturing 'ecology' or environment. To put it as pithily as possible, the somewhat uncomfortable reality is that, psychologically, a real-world, physical community is _always_ a "safe space", no matter whether it is explicitly designated as such or not, or whether its members intend it as such or not. And yes, this "safe space" characterization comes with all the usual 'political' implications about the acceptability of criticism—except that these implications are actually a lot more cogent here than in your average social club on Tumblr or whatever! I do apologize for resorting to contentious "political" or even "tribal" language, which seems to be frowned upon by the new moderation guidelines, but no "guidelines" or rules of politeness could possibly help us escape the obvious fact that doing something physically, in the real world, always comes with very real political consequences which, as such, need to be addressed via an attitude that's properly mindful and inclined to basic values such as compromise and adaptability—no matter what the context!
By the time I got to the end of this, I realized I wasn’t quite sure what you were trying to say.
Given that, I’m sort of shooting-in-the-dark, and may not actually be responding to your point …
1) I don’t think the difference between “talking about internet stuff” and “talking about stuff that’s happening IRL” has any meaningful relevance when it comes to standards of discourse. I imagined you feeling sympathy for people’s panic and tribalism and irrationality because they were looking at a real-life project with commensurately higher stakes; I don’t feel such sympathy myself. I don’t want to carve out an exception that says “intellectual dishonesty and immature discourse are okay if it’s a situation where you really care about something important, tho.”
2) I’m uncertain what you’re pointing at with your references to political dynamics, except possibly the thing where people feel pressure to object or defend not only because of their own beliefs, but also because of second-order effects (wanting to be seen to object or defend, wanting to embolden other objectors or defenders, not wanting to be held accountable for failing to object or defend and thereby become vulnerable to accusations of tacit support).
I reiterate that there was a lot of excellent, productive, and useful discourse from people who were unambiguously opposed to the idea. There were people who raised cogent objections politely, with models to back those objections up and concrete suggestions for next actions to ameliorate them.
Then there were the rationalists-in-name-only (I can think of a few specific ones in the Charter thread, and a few on Tumblr whose rants were forwarded to me) whose whole way of approaching intellectual disagreement is fundamentally wrong and corrosive and should be consistently, firmly, and unambiguously rejected. It’s like the thing where people say “we shouldn’t be so tolerant that we endorse wildly toxic and intolerant ranting that itself threatens the norm of tolerance,” only it’s even worse in this case because what’s at stake is whether our culture trends toward truthseeking at all.
------------------------------------------------------------------------------------
I would appreciate it if you’re willing to restate what you wanted to convey, as I don’t think I ended up catching it.
Well, human psychology says that “stuff that’s happening IRL” kinda has to play by its own rules. Online social clubs simply aren’t treated the same by common sense ‘etiquette’ (much less common-sense morality!) as actual communities where people naturally have far higher stakes.
If you think I’m advocating for willful dishonesty and immaturity, than you completely missed the point of what I was saying. Perhaps you are among those who intuitively associate “politics” or even “tribalism” with such vices (ignoring the obvious fact that a ‘group house’ itself is literally, inherently tribal—as in, defining a human “tribe”!) You may want to reference e.g. Bernard Crick’s short work In Defense of Politics (often assigned in intro poli-sci courses as required reading!) for a very different POV indeed of what “political” even means. Far beyond the usual ‘virtues of rationality’, other virtues such as adaptability, compromise, creativity etc. --even humor! are inherently political.
The flip side of this, though, is that people will often disagree about what's intellectually dishonest or immature in the first place! Part of a productive attitude to contentious debate is an ability and inclination to look beyond these shallow attributions, to a more charitable view of even "very bad" arguments. Truth-seeking is OK and should always be a basic value, but it simply can't be any sort of all-encompassing goal when we're dealing with real-world communities, with all their attendant issues.
I still don’t follow what you’re actually advocating, though, or what specific thing you’re criticizing. Would you mind explaining to me like I’m five? Or, like, boiling it down into the kinds of short, concrete statements from which one could construct a symbolic logic argument?
I skimmed some of Crick and read some commentary on him, and Crick seems to take the Hobbesian “politics as a necessary compromise” viewpoint. (I wasn’t convinced by his definition of the word politics, which seemed not to point at what I would point at as politics.)
My best guess: I think they’re arguing not that immature discourse is okay, but that we need to be more polite toward people’s views in general for political reasons, as long as the people are acting somewhat in good faith (I suspect they think that you’re not being sufficiently polite toward those you’re trying to throw out of the overton window). As a result, we need to engage less in harsh criticism when it might be seen as threatening.
That being said, I also suspect that Duncan would agree that we need to be charitable. I suspect the actual disagreement is whether the behavior of the critics Duncan is replying to are actually the sort of behavior we want/need to accept in our community.
(Personally, I think we need to be more willing to do real-life experiments, even if they risk going somewhat wrong. And I think some of the Tumblr criticism definitely fell outside of what I would want in the Overton window. So I'm okay with Duncan's parenthetical, though it would have been nicer if it had been more explicit about who it was responding to.)
Actually, what I would say here is that “politeness” itself (and that’s actually a pretty misleading term since we’re dealing with fairly important issues of morality and ethics, not just shallow etiquette—but whatever, let’s go with it) entails that we should seek a clear understanding of what attitudes we’re throwing out of the Overton window, and why, or out of what sort of specific concerns. There’s nothing wrong whatsoever with considering “harsh criticism [that] might be seen as threatening” as being outside the Overton window, but whereas this makes a lot of sense when dealing with real-world based efforts like the Dragon Army group, or the various “rationalist Baugruppes” that seem to be springing up in some places, it feels quite silly to let the same attitude infect our response to “criticism” of Less Wrong as an online site, or of LessWrong 2 for that matter, or even of the “rationalist” community not as an actual community that might be physically manifested in some place, but as a general shared mindset.
When we say that "the behavior of the critics Duncan is replying to are [not] the sort of behavior we want/need to accept in our community", what do we actually mean by "behavior" and "community" here? Are we actually pointing at the real-world concerns inherent in "criticizing" an effort like Dragon Army in a harsh, impolite, and perhaps even threatening way (if perhaps 'threatening' only in a political sense, such as by threatening a loss of valued real-world allies!)? Or are we using these terms in a metaphorical sense that could in some sense encompass everything we might "do" on the Internet as folks with a rationalist mindset? I see the very fact that it's not really "explicit who (or what) [we're] responding to" as a problem that needs to be addressed in some way, at least w.r.t. its broadest plausible implications—even though I definitely understand the political benefits of understating such things!
Oli, I’m not sure if you saw the Tumblr criticism, but it was really bad, in some ways even worse than the numbers guy.
Ah, I think I now remember some of that stuff, though I think I only skimmed a small subset of it. What I remember did seem quite bad. I did not think of those when writing the above comment (I don't think it would have changed the general direction of my concerns, though it does change the magnitude of my worries).
Including some from rationality community members with relatively high social standing.
I think this falls under the category ‘risks one must be willing to take, and which are much less bad than they look.’ If it turns out he was thinking of you, him saying this in writing doesn’t make things any worse. Damage, such as it is, is already done.
Yes, there’s the possible downside of ‘only the people he’s not actually talking about feel bad about him saying it’ but a little common knowledge-enabled smackdown seems necessary and helpful, and Duncan has a right to be upset and frustrated by the reception.
This seems to imply that if person A thinks bad things of person B and says this out loud, then the only effect is that person B becomes aware of person A thinking bad things of them. But that only works if it’s a private conversation: person C finding out about this may make them also like B less, or make them like A less, or any number of other consequences.
Yep, the chilling effect comes from the public ridicule, not Duncan’s individual judgement.
My presumption was that person A thinks bad things about class of people D, which B may or may not belong to and is worried that B belongs to, but when others think of D they don’t think of B, so C’s opinion of B seems unlikely to change. If people assume B is in D, then that would be different (although likely still far less bad than it would feel like it was).
There seems to be a common thing where statements about a class of people D, will associate person B with class D by re-centering the category of D towards including B, even if it’s obvious that the original statement doesn’t refer to B. This seems like the kind of a case where that effect could plausibly apply (here, in case it’s not clear, B is a reasonable critic and D is the class of unreasonable critics).