The weatherman who predicts a 20% chance of rain on a sunny day isn’t necessarily wrong. Even the weatherman who predicts 80% chance of rain on a sunny day isn’t *necessarily* wrong.
If there’s a norm of shaming critics who predict very bad outcomes, of the sort “20% chance this leads to disaster”, then after shaming them the first four times their prediction fails to come true, they’re not going to mention it the fifth time, and then nobody will be ready for the disaster.
I don’t know exactly how to square this with the genuine beneficial effects of making people have skin in the game for their predictions, except maybe for everyone to be more formal about it and have institutions that manage this sort of thing in an iterated way using good math. That’s why I’m glad you were willing to bet me about this, though I don’t know how to solve the general case.
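The "iterated way using good math" gestured at above is usually done with a proper scoring rule such as the Brier score. Here is a minimal sketch (all numbers are hypothetical illustrations, not data from any real forecaster) showing why a single sunny day barely distinguishes the 20% forecaster from the 80% forecaster, while a longer track record does:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better; a perfectly calibrated and perfectly sharp
    forecaster scores 0, and a coin-flip forecast of 0.5 scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# One sunny day (outcome 0): both forecasters "missed", but by very
# different margins, and neither score proves much on its own.
print(brier_score([0.2], [0]))  # ≈ 0.04
print(brier_score([0.8], [0]))  # ≈ 0.64

# Over many days where it actually rains about 20% of the time, the
# scores cleanly separate honest calibration from overconfident alarm.
rainy_days = [1, 0, 0, 0, 0] * 20   # 100 days, 20 rainy
calibrated = [0.2] * 100
alarmist = [0.8] * 100
print(brier_score(calibrated, rainy_days))  # ≈ 0.16
print(brier_score(alarmist, rainy_days))    # ≈ 0.52
```

The point of a setup like this is that nobody needs to be shamed after any single failed prediction: the institution just keeps score, and systematic miscalibration shows up in the running average.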
If there’s a norm of shaming critics who predict very bad outcomes
I think it is hugely important to point out that this is not the norm Duncan is operating under or proposing. I understand Duncan as saying “hey, remember those people who were nasty and uncharitable and disgusted by me and my plans? Their predictions failed to come true.”
Like, quoting from you during the original discussion of the charter:
I would never participate in the linked concept and I think it will probably fail, maybe disastrously.
But I also have an (only partially endorsed) squick reaction to the comments against it. I guess I take it as more axiomatic than other people that if people want to try something weird, and are only harming themselves, then if you make fun of them for it, you’re a bully.
Of course, here, “the comments against it” isn’t ‘anyone who speaks out against the idea.’ One can take jbeshir’s comment as an example of someone pointing directly at the possibility of catastrophic abuse while maintaining good discourse and epistemic norms.
---
I note that I am generally not a fan of vaguebooking / making interventions on the abstract level instead of the object level, and if I were going to write a paragraph like the one in the OP I would have named names instead of making my claims high-context.
Agreed that some people were awful, but I still think this problem applies.
If somebody says “There’s an 80% chance of rain today, you idiot, and everyone who thinks otherwise deserves to die”, then it’s still not clear that a sunny day has proven them wrong. Or rather, they were always wrong to be a jerk, but a single run of the experiment doesn’t do much to prove they were wronger than we already believed.
Or rather, they were always wrong to be a jerk, but a single run of the experiment doesn’t do much to prove they were wronger than we already believed.
To be clear, I agree with this. Furthermore, while I don’t remember people giving probability distributions, I think it’s fair to guess that critics as a whole (and likely even the irrational critics) put higher probability on the coarse description of what actually happened than Duncan or those of us who tried the experiment did, which makes an “I told you so!” about having assigned lower probability to something that didn’t happen ring hollow.
I agree with this. Perhaps a better expression of the thing (if I had felt like it was the right spot in the piece to spend this many words) would’ve been:
they were systematically wrong then, in loudly espousing beliefs whose truth value was genuinely in question but for which they had insufficient justification, and wrong in terms of their belongingness within the culture of a group of people who want to call themselves “rationalists” and who care about making incremental progress toward actual truth, and I believe that the sacrifice of their specific, non-zero, non-useless data and perspective is well worth making to have the correct walls around our garden and weeding heuristics within it. And I see no reason for that to have changed in the intervening six months.
I suspect that coming out of the gate with that many words would’ve pattern-matched to whining, though, and that my specific parenthetical was still stronger once you take into account social reality.
I’m curious if you a) agree or disagree or something-else with the quote above, and b) agree or disagree or something-else with my prediction that the above would’ve garnered a worse response.
The problem is absolutely not that people were predicting very bad outcomes. People on Tumblr were doing things like (I’m working from memory here) openly speculating about how incredibly evil and sick and twisted Duncan must be to even want to do anything like this, up to something like (again, working from memory here) talking about conspiring to take Duncan down somehow to prevent him from starting Dragon Army.
As someone who didn’t follow the original discussions either on Tumblr or LW, this was totally unclear from Duncan’s parenthetical remark in the OP. So I think for the purpose of “common-knowledge deterrence of that sort of behavior” that section totally failed, since lots of people must have, like Scott and me, gotten wrong ideas about what kind of behavior Duncan wanted to deter.
A part of my model here is that it’s impossible from a social perspective for me to point these things out explicitly.
I can’t describe the dynamic directly (my thinking contains some confusion) so I’ll point out an analogous thing.
Alex and Bradley have had a breakup.
Alex is more destabilized than Bradley, by the breakup—to the point that Alex finds it impossible to occupy the same space as Bradley. This is not a claim about Bradley being bad or in the wrong or responsible (nor the opposite); it’s just a brute fact about Alex’s emotional state.
There’s an event with open borders, or with broad-spectrum invites.
If Bradley goes, Alex cannot go. The same is not true in reverse; Bradley is comfortable shrugging and just handling it.
Alex absolutely cannot be the person to raise the question “Hey, maybe we have to do something about this situation, vis-a-vis inviting Alex or Bradley.”
If Alex says that, this is inevitably interpreted as Alex doing something like taking hostages, or trying to divide up the universe and force people to take sides, or being emotionally immature and unreasonable. This is especially true because Bradley’s right there, providing a contrasting example of “it’s totally fine for us to coexist in the same room.” Alex will look like The Source Of The Problem.
However, it’s completely fine if someone else (Cameron) says “Hey, look—I think Alex needs more space, and we as a social group should figure out some way to create space for the processing and healing to happen. Maybe we invite Bradley to this one, but tell both Bradley and Alex that we’ll invite Alex-and-only-Alex to the next one?”
Like, the social fabric is probably intelligent enough to handle the division without assigning fault or blame. But that requires third-party action. It can’t come from Alex; it can maybe barely come from Bradley, if Bradley is particularly mature and savvy (but if Bradley doesn’t feel like it, Bradley can just not).
In a similar sense, I tried real hard to point out the transgressions being made in the LW thread and on Tumblr, and this ended up backfiring on me, even though I claim with high confidence that if the objections had been raised by someone else, most LWers would’ve agreed with them.
So in this post, I drew the strongest possible line-in-the-sand that I could, and then primarily have sat back, rather than naming names or pulling quotes or trying to get specific. People hoping that I would get specific are (I claim) naively mispredicting what the results would have been, had I done so.
In this case, I owe the largest debts of gratitude to Qiaochu and Vaniver, for being the Cameron to my Alex-Bradley situation. They are saying things that it is unpossible for me to say, because of the way humans tend to pattern-match in such situations.
This is indeed an important dynamic to discuss, so I’m glad you brought it up, but I think your judgment of the correct way to handle it is entirely wrong, and quite detrimental to the health of social groups and communities.
You say:
Alex will look like The Source Of The Problem.
But in fact Alex not only “will look like”, but in fact is, the source of the problem. In fact, the entirety of the problem is Alex’s emotional issues (and any consequences thereof, such as social discomfort inflicted upon third parties, conflicts that are generated due to Alex’s presence or behavior, etc.). There is no problem beyond or separately from that.
However, it’s completely fine if someone else (Cameron) says “Hey, look—I think Alex needs more space, and we as a social group should figure out some way to create space for the processing and healing to happen. Maybe we invite Bradley to this one, but tell both Bradley and Alex that we’ll invite Alex-and-only-Alex to the next one?”
This is only “fine” to the extent that the social group as a whole understands, and endorses, the fact that this “solution” constitutes taking Alex’s side.
Now, it is entirely possible that the social group does indeed understand and endorse this—that they are consciously taking Alex’s side. Maybe Alex is a good friend of many others in the group; whereas Bradley, while they like him well enough, is someone they only know through Alex—and thus they owe him a substantially lesser degree of loyalty than they do Alex. Such situations are common enough, and there is nothing inherently wrong with taking one person’s side over the other in such a case.
What is wrong is taking one person’s side, while pretending that you’re being impartial.
A truly impartial solution would look entirely different. It would look like this:
“Alex, Bradley, both of you are free to come, or not come, as you like. If one or both of you have emotional issues, or conflicts, or anything like that—work them out yourselves. Our [i.e., the group’s] relationship is with both of you separately and individually; we will thus continue to treat both of you equally and fairly, just as we treat every other one of us.”
As for Cameron… were I Bradley, I would interpret his comment as covert side-taking. (Once again: it may be justified, and not dishonorable at all. But it is absolutely not neutral.)
I think the view I’d take is somewhere in between this view and the view that Duncan described.
If I’m sending out invites to a small dinner party, I’d just alternate between inviting Alex and Bradley.
However, if it’s an open invite thing, it seems like the official policy should be that Alex and Bradley are both invited (assuming all parties are in good standing with the group in general), but if I happen to be close to Bradley I might privately suggest that they skip out on some events so that Alex can go, because that seems like the decent thing to do. (And if Bradley does skip, I would probably consider that closer to supererogatory rather than mandatory and award them social points for doing so.)
Similarly, if I’m close to Alex, I might nudge them towards doing whatever processing is necessary to allow them to coexist with Bradley, so that Bradley doesn’t have to skip.
So I’m agreeing with you that official policy for open-invite things should be that both are invited. But I think I’m disagreeing about whether it’s ever reasonable to expect Bradley to skip some events for the sake of Alex.
I think your data set is impoverished. I think you could, in the space of five-minutes-by-the-clock, easily come up with multiple situations in which Alex is not at all the source of the problem, but rather Bradley, and I think you can also easily come up with multiple situations in which Alex and Bradley are equally to blame. In your response, you have focused only on cases in which it’s Alex’s fault, as if they represent the totality of possibility, which seems sloppy or dishonest or knee-jerk or something (a little). Your “truly impartial” solution is quite appropriate in the cases where fault is roughly equally shared, but miscalibrated in the cases where Bradley is the source of the problem. Indeed, it can result in tacit social endorsement of abuse in rare-but-not-extremely-rare sorts of situations.
Neither ‘blame’ nor ‘fault’ are anywhere in my comment.
And that’s the point: your perspective requires the group to assign blame, to adjudicate fault, to take sides. Mine does not. In my solution, the group treats what has transpired as something that’s between Alex and Bradley. The group takes no position on it. Alex now proclaims an inability to be in the same room as Bradley? Well, that’s unfortunate for Alex, but why should that affect the group’s relationship with Bradley? Alex has this problem; Alex will have to deal with it.
To treat the matter in any other way is to take sides.
You say:
I think you could, in the space of five-minutes-by-the-clock, easily come up with multiple situations in which Alex is not at all the source of the problem, but rather Bradley,
How can this be? By (your own) construction, Bradley is fine with things proceeding just as they always have, w.r.t. the group’s activities. Bradley makes no impositions; Bradley asks for no concessions; Bradley in fact neither does nor says anything unusual or unprecedented. If Alex were to act exactly as Bradley is acting, then the group might never even know that anything untoward had happened.
Once again: it may be right and proper for a group to take one person’s side in a conflict. (Such as in your ‘abuse’ example.) But it is dishonest, dishonorable, and ultimately corrosive to the social fabric, to take sides while pretending to be impartial.
I think it’s intellectually dishonest to write:

But in fact Alex not only “will look like”, but in fact is, the source of the problem. In fact, the entirety of the problem is Alex’s emotional issues
… and then say “Neither ‘blame’ nor ‘fault’ are anywhere in my comment.” I smell a motte-and-bailey in that. There’s obviously a difference between blame games and fault analysis (in the former, one assigns moral weight and docks karma from a person’s holistic score; in the latter, one simply says “X caused Y”). But even in the dispassionate fault analysis sense, it strikes me as naive to claim that Alex’s reaction is—in ALL cases that don’t involve overt abuse—entirely a property of Alex and is entirely Alex’s responsibility.
I think you’re misunderstanding what I’m saying.

You seem to think that I’m claiming something like “it’s Alex’s fault that Alex feels this way”. But I’m claiming no such thing. In fact, basically the entirety of my point is that (in the “impartiality” scenario), as far as the group is concerned, it’s simply irrelevant why Alex feels this way. We can even go further and say: it’s irrelevant what Alex does or does not feel. Alex’s feelings are Alex’s business. The group is not interested in evaluating Alex’s feelings, in judging whether they are reasonable or unreasonable, in determining whether Alex is at fault for them or someone else is, etc., etc.
What I am saying is that Alex—specifically, Alex’s behavior (regardless of what feelings are or are not the cause of that behavior)—manifestly is the source of the problem for the group; that problem being, of course, “we now have to deal with one of our members refusing to be in the same room with another one of our members”.
As soon as you start asking why Alex feels this way, and whose fault is it that Alex feels this way, and whether it is reasonable for Alex to feel this way, etc., etc., you are committing yourself to some sort of side-taking. Here is what neutrality would look like:
Alex, to Group [i.e. spokesmember(s) thereof]: I can no longer stand to be in the same room as Bradley! Any event he’s invited to, I will not attend.
Group: Sounds like a bummer, man. Bradley’s invited to all public events, as you know (same as everyone else).
Alex: I have good reasons for feeling this way!
Group: Hey, that’s your own business. It’s not our place to evaluate your feelings, or judge whether they’re reasonable or not. Whether you come to things or not is, as always, your choice. You can attend, or not attend, for whatever reasons you like, or for no particular reason at all. You’re a free-willed adult—do what you think is best; you don’t owe us any explanations.
Alex: But it’s because…
Group (interrupting): No, really. It’s none of our business.
Alex: But if I have a really good reason for feeling this way, you’ll side with me, and stop inviting Bradley to things… right??
Group: Wrong.
Alex: Oh.
Separately:
But even in the dispassionate fault analysis sense, it strikes me as naive to claim that Alex’s reaction is—in ALL cases that don’t involve overt abuse—entirely a property of Alex and is entirely Alex’s responsibility.
Responsibility is one thing, but Alex’s reaction is obviously entirely a property of Alex. I am perplexed by the suggestion that it can be otherwise.
Yeah, but you can’t derive fault from a property, because by your own admission your model makes no claim of fault. At most you can say that Alex is the immediate causal source of the problem.
Ah, but who will argue for the “Alexes” who were genuinely made uncomfortable by the proposed norms of Dragon Army—perhaps to the point of disregarding even some good arguments and/or evidence in favor of it—and who are now being conflated with horribly abusive people as a direct result of this LW2 post? Social discomfort can be a two-way street.
So this parenthetical-within-the-parenthetical didn’t help, huh?
(here I am specifically not referring to those who pointed at valid failure modes and criticized the idea in constructive good faith, of whom there were many)
I guess one might not have had a clear picture what Duncan was counting as constructive criticism.
There were people like that, but there were also people who talked about the risks without sending ally-type signals of “but this is worth trying” or “on balance this is a good idea,” whom Duncan would then accuse of “bad faith” and “strawmanning,” and would lump in with the people you’re thinking of.
I request specific quotations rather than your personal summary. I acknowledge that I have not been providing specific quotations myself, and have been providing my summary; I acknowledge that I’m asking you to meet a standard I have yet to meet myself, and that it’s entirely fair to ask me to meet it as well.
If you would like to proceed with both of us agreeing to the standard of “provide specific quotations with all of the relevant context, and taboo floating summaries and opinions,” then I’ll engage. Else, I’m going to take the fact that you created a brand-new account with a deliberately contrarian title as a signal that I should not-reply and should deal with you only through the moderation team.
Thank you Qiaochu_Yuan for this much-needed clarification! It seems kinda important to address this sort of ambiguity well before you start casually talking about how ‘some views’ ought to be considered unacceptable for the sake of our community. (Thus, I think both habryka and Duncan have some good points in the debate about what sort of criticism should be allowed here, and what standards there should be for the ‘meta’ level of “criticizing critics” as wrongheaded, uncharitable, or whatever.)
casually talking about how ‘some views’ ought to be considered unacceptable for the sake of our community
I don’t understand what this is referring to. This discussion was always about epistemic norms, not object-level positions, although I agree that this could have been made clearer. From the OP:
I myself was wrong to engage with them as if their beliefs had cruxes that would respond to things like argument and evidence.
To be clear, I’m also unhappy with the way Duncan wrote the snark paragraph, and I personally would have either omitted it or been more specific about what I thought was bad.
I myself was wrong to engage with them as if their beliefs had cruxes that would respond to things like argument and evidence.
This is a fully general counterargument to any sort of involvement by people with even middling real-world concerns in LW2, so if you mean to cite this remark approvingly as an example of how we should enforce our own standard of “perfectly rational” epistemic norms, I really have to oppose this. It is simply a fact about human psychology that “things like argument and evidence” are perhaps necessary but not sufficient to change people’s minds about issues of morality or politics that they actually care about, in a deep sense! This is the whole reason why Bernard Crick developed his own list of political virtues, which I cited earlier in this very comment section. We should be very careful about this, and not let non-central examples on the object level skew our thinking about these matters.
The weatherman who predicts a 20% chance of rain on a sunny day isn’t necessarily wrong. Even the weatherman who predicts 80% chance of rain on a sunny day isn’t *necessarily* wrong.
If there’s a norm of shaming critics who predict very bad outcomes, of the sort “20% chance this leads to disaster”, then after shaming them the first four times their prediction fails to come true, they’re not going to mention it the fifth time, and then nobody will be ready for the disaster.
I don’t know exactly how to square this with the genuine beneficial effects of making people have skin in the game for their predictions, except maybe for everyone to be more formal about it and have institutions that manage this sort of thing in an iterated way using good math. That’s why I’m glad you were willing to bet me about this, though I don’t know how to solve the general case.
I think it is hugely important to point out that this is not the norm Duncan is operating under or proposing. I understand Duncan as saying “hey, remember those people who were nasty and uncharitable and disgusted by me and my plans? Their predictions failed to come true.”
Like, quoting from you during the original discussion of the charter:
Of course, here, “the comments against it” isn’t ‘anyone who speaks about against the idea.’ One can take jbeshir’s comment as an example of someone pointing directly at the possibility of catastrophic abuse while maintaining good discourse and epistemic norms.
---
I note that I am generally not a fan of vaguebooking / making interventions on the abstract level instead of the object level, and if I were going to write a paragraph like the one in the OP I would have named names instead of making my claims high-context.
Agreed that some people were awful, but I still think this problem applies.
If somebody says “There’s a 80% chance of rain today, you idiot, and everyone who thinks otherwise deserves to die”, then it’s still not clear that a sunny day has proven them wrong. Or rather, they were always wrong to be a jerk, but a single run of the experiment doesn’t do much to prove they were wronger than we already believed.
To be clear, I agree with this. Furthermore, while I don’t remember people giving probability distributions, I think it’s fair to guess that critics as a whole (and likely even the irrational critics) put higher probability on the coarse description of what actually happened than Duncan or those of us that tried the experiment, and that makes an “I told you so!” about assigning lower probability to something that didn’t happen hollow.
I agree with this. Perhaps a better expression of the thing (if I had felt like it was the right spot in the piece to spend this many words) would’ve been:
I suspect that coming out of the gate with that many words would’ve pattern-matched to whining, though, and that my specific parenthetical was still stronger once you take into account social reality.
I’m curious if you a) agree or disagree or something-else with the quote above, and b) agree or disagree or something-else with my prediction that the above would’ve garnered a worse response.
The problem is absolutely not that people were predicting very bad outcomes. People on Tumblr were doing things like (I’m working from memory here) openly speculating about how incredibly evil and sick and twisted Duncan must be to even want to do anything like this, up to something like (again, working from memory here) talking about conspiring to take Duncan down somehow to prevent him from starting Dragon Army.
As someone who didn’t follow the original discussions either on Tumblr or LW, this was totally unclear from Duncan’s parenthetical remark in the OP. So I think for the purpose of “common-knowledge deterrence of that sort of behavior” that section totally failed, since lots of people must have, like Scott and I, gotten wrong ideas about what kind of behavior Duncan wanted to deter.
A part of my model here is that it’s impossible from a social perspective for me to point these things out explicitly.
I can’t describe the dynamic directly (my thinking contains some confusion) so I’ll point out an analogous thing.
Alex and Bradley have had a breakup.
Alex is more destabilized than Bradley, by the breakup—to the point that Alex finds it impossible to occupy the same space as Bradley. This is not a claim about Bradley being bad or in the wrong or responsible (nor the opposite); it’s just a brute fact about Alex’s emotional state.
There’s an event with open borders, or with broad-spectrum invites.
If Bradley goes, Alex cannot go. The same is not true in reverse; Bradley is comfortable shrugging and just handling it.
Alex absolutely cannot be the person to raise the question “Hey, maybe we have to do something about this situation, vis-a-vis inviting Alex or Bradley.”
If Alex says that, this is inevitably interpreted as Alex doing something like taking hostages, or trying to divide up the universe and force people to take sides, or being emotionally immature and unreasonable. This is especially true because Bradley’s right there, providing a contrasting example of “it’s totally fine for us to coexist in the same room.” Alex will look like The Source Of The Problem.
However, it’s completely fine if someone else (Cameron) says “Hey, look—I think Alex needs more space, and we as a social group should figure out some way to create space for the processing and healing to happen. Maybe we invite Bradley to this one, but tell both Bradley and Alex that we’ll invite Alex-and-only-Alex to the next one?”
Like, the social fabric is probably intelligent enough to handle the division without assigning fault or blame. But that requires third-party action. It can’t come from Alex; it can maybe barely come from Bradley, if Bradley is particularly mature and savvy (but if Bradley doesn’t feel like it, Bradley can just not).
In a similar sense, I tried real hard to point out the transgressions being made in the LW thread and on Tumblr, and this ended up backfiring on me, even though I claim with high confidence that if the objections had been raised by someone else, most LWers would’ve agreed with them.
So in this post, I drew the strongest possible line-in-the-sand that I could, and then primarily have sat back, rather than naming names or pulling quotes or trying to get specific. People hoping that I would get specific are (I claim) naively mispredicting what the results would have been, had I done so.
In this case, I owe the largest debts of gratitude to Qiaochu and Vaniver, for being the Cameron to my Alex-Bradley situation. They are saying things that it is unpossible for me to say, because of the way humans tend to pattern-match in such situations.
This concept feels to me like it deserves a top level post.
This is indeed an important dynamic to discuss, so I’m glad you brought it up, but I think your judgment of the correct way to handle it is entirely wrong, and quite detrimental to the health of social groups and communities.
You say:
But in fact Alex not only “will look like”, but in fact is, the source of the problem. In fact, the entirety of the problem is Alex’s emotional issues (and any consequences thereof, such as social discomfort inflicted upon third parties, conflicts that are generated due to Alex’s presence or behavior, etc.). There is no problem beyond or separately from that.
This is only “fine” to the extent that the social group as a whole understands, and endorses, the fact that this “solution” constitutes taking Alex’s side.
Now, it is entirely possible that the social group does indeed understand and endorse this—that they are consciously taking Alex’s side. Maybe Alex is a good friend of many others in the group; whereas Bradley, while they like him well enough, is someone they only know through Alex—and thus they owe him a substantially lesser degree of loyalty than they do Alex. Such situations are common enough, and there is nothing inherently wrong with taking one person’s side over the other in such a case.
What is wrong is taking one person’s side, while pretending that you’re being impartial.
A truly impartial solution would look entirely different. It would look like this:
“Alex, Bradley, both of you are free to come, or not come, as you like. If one or both of you have emotional issues, or conflicts, or anything like that—work them out yourselves. Our [i.e., the group’s] relationship is with both of you separately and individually; we will thus continue to treat both of you equally and fairly, just as we treat every other one of us.”
As for Charlie… were I Bradley, I would interpret his comment as covert side-taking. (Once again: it may be justified, and not dishonorable at all. But it is absolutely not neutral.)
I think the view I’d take is somewhere in between this view and the view that Duncan described.
If I’m sending out invites to a small dinner party, I’d just alternate between inviting Alex and Bradley.
However, if it’s an open invite thing, it seems like the official policy should be that Alex and Bradley are both invited (assuming all parties are in good standing with the group in general), but if I happen to be close to Bradley I might privately suggest that they skip out on some events so that Alex can go, because that seems like the decent thing to do. (And if Bradley does skip, I would probably consider that closer to supererogatory rather than mandatory and award them social points for doing so.)
Similarly, if I’m close to Alex, I might nudge them towards doing whatever processing is necessary to allow them to coexist with Bradley, so that Bradley doesn’t have to skip.
So I’m agreeing with you that official policy for open-invite things should be that both are invited. But I think I’m disagreeing about whether it’s ever reasonable to expect Bradley to skip some events for the sake of Alex.
I think your data set is impoverished. I think you could, in the space of five-minutes-by-the-clock, easily come up with multiple situations in which Alex is not at all the source of the problem, but rather Bradley, and I think you can also easily come up with multiple situations in which Alex and Bradley are equally to blame. In your response, you have focused only on cases in which it’s Alex’s fault, as if they represent the totality of possibility, which seems sloppy or dishonest or knee-jerk or something (a little). Your “truly impartial” solution is quite appropriate in the cases where fault is roughly equally shared, but miscalibrated in the former. Indeed, it can result in tacit social endorsement of abuse in rare-but-not-extremely-rare sorts of situations.
Neither ‘blame’ nor ‘fault’ are anywhere in my comment.
And that’s the point: your perspective requires the group to assign blame, to adjudicate fault, to take sides. Mine does not. In my solution, the group treats what has transpired as something that’s between Alex and Bradley. The group takes no position on it. Alex now proclaims an inability to be in the same room as Bradley? Well, that’s unfortunate for Alex, but why should that affect the group’s relationship with Bradley? Alex has this problem; Alex will have to deal with it.
To treat the matter in any other way is to take sides.
You say:
How can this be? By (your own) construction, Bradley is fine with things proceeding just as they always have, w.r.t. the group’s activities. Bradley makes no impositions; Bradley asks for no concessions; Bradley in fact neither does nor says anything unusual or unprecedented. If Alex were to act exactly as Bradley is acting, then the group might never even know that anything untoward had happened.
Once again: it may be right and proper for a group to take one person’s side in a conflict. (Such as in your ‘abuse’ example.) But it is dishonest, dishonorable, and ultimately corrosive to the social fabric, to take sides while pretending to be impartial.
I think it’s intellectually dishonest to write:
… and then say “Neither ‘blame’ nor ‘fault’ are anywhere in my comment.” I smell a motte-and-bailey here. There’s obviously a difference between blame games and fault analysis (in the former, one assigns moral weight and docks karma from a person’s holistic score; in the latter, one simply says “X caused Y”). But even in the dispassionate fault-analysis sense, it strikes me as naive to claim that Alex’s reaction is—in ALL cases that don’t involve overt abuse—entirely a property of Alex and entirely Alex’s responsibility.
I think you’re misunderstanding what I’m saying.
You seem to think that I’m claiming something like “it’s Alex’s fault that Alex feels this way”. But I’m claiming no such thing. In fact, basically the entirety of my point is that (in the “impartiality” scenario), as far as the group is concerned, it’s simply irrelevant why Alex feels this way. We can even go further and say: it’s irrelevant what Alex does or does not feel. Alex’s feelings are Alex’s business. The group is not interested in evaluating Alex’s feelings, in judging whether they are reasonable or unreasonable, in determining whether Alex is at fault for them or someone else is, etc., etc.
What I am saying is that Alex—specifically, Alex’s behavior (regardless of what feelings are or are not the cause of that behavior)—manifestly is the source of the problem for the group; that problem being, of course, “we now have to deal with one of our members refusing to be in the same room with another one of our members”.
As soon as you start asking why Alex feels this way, and whose fault is it that Alex feels this way, and whether it is reasonable for Alex to feel this way, etc., etc., you are committing yourself to some sort of side-taking. Here is what neutrality would look like:
Alex, to Group [i.e. spokesmember(s) thereof]: I can no longer stand to be in the same room as Bradley! Any event he’s invited to, I will not attend.
Group: Sounds like a bummer, man. Bradley’s invited to all public events, as you know (same as everyone else).
Alex: I have good reasons for feeling this way!
Group: Hey, that’s your own business. It’s not our place to evaluate your feelings, or judge whether they’re reasonable or not. Whether you come to things or not is, as always, your choice. You can attend, or not attend, for whatever reasons you like, or for no particular reason at all. You’re a free-willed adult—do what you think is best; you don’t owe us any explanations.
Alex: But it’s because…
Group (interrupting): No, really. It’s none of our business.
Alex: But if I have a really good reason for feeling this way, you’ll side with me, and stop inviting Bradley to things… right??
Group: Wrong.
Alex: Oh.
Separately:
Responsibility is one thing, but Alex’s reaction is obviously entirely a property of Alex. I am perplexed by the suggestion that it can be otherwise.
Yeah, but you can’t derive fault from a property, because by your own admission your model makes no claim of fault. At most you can say that Alex is the immediate causal source of the problem.
Who ever claimed otherwise?
Ah, but who will argue for the “Alexes” who were genuinely made uncomfortable by the proposed norms of Dragon’s Army—perhaps to the point of disregarding even some good arguments and/or evidence in favor of it—and who are now being conflated with horribly abusive people as a direct result of this LW2 post? Social discomfort can be a two-way street.
I disagree with your summary and frame, and so cannot really respond to your question.
So this parenthetical-within-the-parenthetical didn’t help, huh?
I guess one might not have had a clear picture what Duncan was counting as constructive criticism.
There were people like that, but there were also people who talked about the risks without sending ally-type signals of “but this is worth trying” or “on balance this is a good idea,” whom Duncan would then accuse of “bad faith” and “strawmanning,” and would lump in with the people you’re thinking of.
I request specific quotations rather than your personal summary. I acknowledge that I have not been providing specific quotations myself, and have been providing my summary; I acknowledge that I’m asking you to meet a standard I have yet to meet myself, and that it’s entirely fair to ask me to meet it as well.
If you would like to proceed with both of us agreeing to the standard of “provide specific quotations with all of the relevant context, and taboo floating summaries and opinions,” then I’ll engage. Else, I’m going to take the fact that you created a brand-new account with a deliberately contrarian title as a signal that I should not-reply and should deal with you only through the moderation team.
Thank you, Qiaochu_Yuan, for this much-needed clarification! It seems kinda important to address this sort of ambiguity well before you start casually talking about how ‘some views’ ought to be considered unacceptable for the sake of our community. (Thus, I think both habryka and Duncan have some good points in the debate about what sort of criticism should be allowed here, and what standards there should be for the ‘meta’ level of “criticizing critics” as wrongheaded, uncharitable, or whatever.)
I don’t understand what this is referring to. This discussion was always about epistemic norms, not object-level positions, although I agree that this could have been made clearer. From the OP:
To be clear, I’m also unhappy with the way Duncan wrote the snark paragraph, and I personally would have either omitted it or been more specific about what I thought was bad.
This is a fully general counterargument to any sort of involvement in LW2 by people with even middling real-world concerns—so if you mean to cite this remark approvingly as an example of how we should enforce our own standard of “perfectly rational” epistemic norms, I really have to oppose this. It is simply a fact about human psychology that “things like argument and evidence” are perhaps necessary but not sufficient to change people’s minds about issues of morality or politics that they actually care about, in a deep sense! This is the whole reason why Bernard Crick developed his own list of political virtues, which I cited earlier in this very comment section. We should be very careful about this, and not let non-central examples on the object level skew our thinking about these matters.