I don’t think the mantle is or should be “neutral.” LW is a community with a purpose; neutrality is a death knell; it’s important that people have a strong, principled distinction between “what we do here” and “what we do not do here.”
Like, it’s my sense that there’s too much neutrality, and as a result too much that is not at all in the spirit of LessWrong is being tolerated, to the detriment of the overall project.
----------------------------------
The problem with “you are criticizing me wrong” is that, statistically, empirically, it’s a tool that is used wrong, and shows up in the hands of people who aren’t acting fully in good faith. Like, when the tool is made socially available, people will hide behind it in cases where it isn’t true.
But LessWrong is a) about figuring out what’s true/false and right/wrong, so this is a valuable domain of practice, and b) is, both in its mission and in the makeup of its membership, less likely to have problems in that domain.
Obviously, “less likely” doesn’t take us anywhere near zero. Obviously, if we flipped a switch and everyone felt that “you are criticizing me wrong” was a valid sentence that they were free to say, we’d have a bunch of problems, and no small number of those specific instances would turn out to be motivated by status or monkey politics or bad epistemics or defensiveness.
But that would kick off the double cruxes, with the right moderation and attention to detail. Like, once somebody says “you’re criticizing me according to values that I think are incorrect,” you’re having the meta-level conversation that was alluded to in the punch bug post (where e.g. Christians and atheists have different standards for whether you can believe things without evidence).
It’s not like those thoughts aren’t happening anyway, under the surface—people are dismissing one another left and right as “not getting it” or “not acting in good faith” or “having bad epistemics.” The problem is, there’s no path for them to bring that into the conversation.
I don’t think we get there by just jumping in with both feet.
I agree that we don’t get there by just jumping in with both feet.
You can’t make the culture better by just mimicking the symptoms of a good culture, with no generators behind them.
But I think it should absolutely be a target of this community, that it does not matter whose mouth the true words or the valid questions are coming out of. If a thing is true, or a question is pointing at real uncertainty, then anyone should be able to say/ask it.
It’s fine to have the hypothesis, based on reasonable priors, that a given person saying “you’re criticizing me wrong” is just being defensive or whatever. But it’s not fine to just make that unsayable and dismiss them out of hand. Even if 80% of the marbles are red, and therefore red is the safe bet for the next marble to pop out of the bag, some of the marbles are actually green.
We need to be able to see those.
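(To make the marble point concrete, here is a toy numerical sketch. The 80/20 split and the number of claims are assumptions for illustration only; the point is just that a blanket-dismissal policy loses every green marble, while an engage-with-everything policy pays a cost on every red one.)

```python
# Toy illustration (assumed numbers) of the base-rate point above:
# even when most "you're criticizing me wrong" claims are defensive,
# a blanket policy of dismissal throws away every legitimate one.
import random

random.seed(0)

P_DEFENSIVE = 0.8   # assumed prior: 80% of such claims are "red" (defensive)
N_CLAIMS = 1000     # assumed number of claims observed

claims = ["defensive" if random.random() < P_DEFENSIVE else "legitimate"
          for _ in range(N_CLAIMS)]

# Policy A: dismiss every claim out of hand.
legit_lost_policy_a = sum(c == "legitimate" for c in claims)

# Policy B: engage with every claim (e.g. open a double crux).
defensive_engaged_policy_b = sum(c == "defensive" for c in claims)

print(f"Policy A (dismiss all): {legit_lost_policy_a} legitimate claims lost")
print(f"Policy B (engage all):  {defensive_engaged_policy_b} defensive claims engaged")
```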
I think it’s fine for participants to engage this way.
If a moderator gets embroiled in a disagreement where one side is saying “You’re criticizing me wrong” and the other “I’m trying to criticize you for X,” then this can get real awkward.
If the criticism itself has (potentially) some truth or validity, but the moderator doesn’t acknowledge any of that and instead keeps trying to have a conversation about how the criticism is wrong/improper by LW’s standards, then the way this looks is:
a) A moderator is trying to dodge being criticized
b) They are using the mantle of “upholding LW’s standards” to hide behind, dodging double cruxing at the object level
c) They aren’t acknowledging the overall situation, and so it’s unclear whether the mod is aware of how this all looks and whether they’re doing it on purpose, or if they’re feeling defensive and using principles to (subconsciously) dodge criticism
Here, it is valid to care about more than just whether the mod is technically correct about the criticism’s wrongness! The mod might be correct on the points they’re making. But they’re also doing something weird in the conversation, where it really seems like they’re trying to dodge something. Possibly subconsciously. And the viewers are left to wonder whether that’s actually happening or if they’re mistaken. But it’s awkward for a random viewer to try to “poke the bear” here, given the power differential.
Even worse is when someone does try to “poke the bear” and the mod reacts by denying any accusations of motivated reasoning, while continuing to leave the dynamic unacknowledged and then claiming that this is a culture that should be better than that.
In my head, it is obvious why this is all bad for a mod to do. So I didn’t explain quite why it’s bad. I can try if someone asks.
Yeah. This is a good place for mods to have a system of tag-outs, and maybe a norm of metacog-ing for one another, as you indicated in the OP with
Weird, I was expecting you to disagree. I was trying to illustrate what I thought you were missing in your own arguments around this.
In the disputes I’ve seen you engage in, this is kind of what it looks like is happening. (Except you’re not a mod, just the author of the post.)
(My model of Duncan says that he would have preferred to tap out, but that he didn’t trust anyone else to pick up the flag of the things he cared about, and he perceived the cost of no one defending the important things was larger than the cost of him being in the position of being both the object of discussion and the purveyor of standards)
I’m only opposed to tagging out in worlds where it seems like literally no one else will hold the line. In worlds where there’s a mod team that’s dedicated to firm norms, I’m enthusiastic about other people benching me if it’s reasonable to assume I’m emotionally compromised.
My preference ordering is: [sane team] > [one person holding the line even if compromised and doing it imperfectly] >> [not having the line]
Yay, I get Bayes Points!
(I wrote my above reply before seeing this one, since I hadn’t refreshed the page in a while)
That makes sense.
It would look less like you were emotionally compromised if you tried to do the double crux thing in addition to pointing out the norms violations. E.g., “I think you’re over the line in these ways. [List of ways] But, if you did have some truth to what you’re saying, would it be this? [attempt at understanding their argument / what they are trying to protect]”
(Maybe you have done this, and I missed it.)
But if you haven’t done this, why not?
Alternatively, another move would be, “I feel ___ about engaging with your arguments because they strike me as really uncharitable to the post. Instead I would like to just call out what I think are a list of norms you are violating, which are important to me for wanting to engage with your points.”
^This calls attention to the fact that you are avoiding engaging with the critique of your post. (There are plenty of other ways to do this, I just gave one possible example.)
Does that move seem reasonable / executable?
(I’m noticing that if you felt you “should” do these things, it would be an unreasonable pressure. I think you are absolutely NOT obligated to engage in these ways. I’m pointing at these moves because they would cause me, and likely others, to respect you more in the arena of online debate. I already respect you plenty in lots of other arenas, so. This is like extra?)
I think I have done this in multiple places.
I think I have not done this all the time.
I would expect myself to have a 70% hit rate.
I would not be shocked if my hit rate were as low as 50%.
I’ve just attempted to live up to this standard in the other thread, in a tangential reply to benquo.
[comments I may regret writing[1]]
There’s an error I perceive you as persistently making which I don’t think I can describe succinctly (I have a blogpost coming up that will attempt to delve into it), but, well, I dunno here goes anyway woooo.
I’ve run into what I perceive as the same error mode with Oli, Benquo, and Ialdabaoth from time to time. Basically, most of the time that a rationalist does the “someone has to be the only sane person in the room” thing, the error mode comes up.
It ties in closely with the thing you said recently about “one of us is level N, and the other of us is level N − 1, and neither of us can be sure which is which”, and has to do with you not noticing that was what was going on, and being way more confident than was warranted that you are the one on level N, and not doing any of the conversational moves that I think are necessary to account for our collective subjective uncertainty.
A related bit of evidence here is the thing where you perceived your recent moderation post as addressing your core cruxes, but it didn’t actually address any of my cruxes (not 100% sure about other mods), which is evidence against your ability to pre-emptively pass people’s ITT’s sufficiently to do the particular style of doublecruxing that it seems like you’re trying to do.
I’ve felt consistently like you round my criticisms off to something more easily stereotypable.
Sometimes it’s necessary to be the only sane person in the room and speak out and fight the fight nobody else is fighting, but at least in a room full of rationalists, if everyone is disagreeing with you, you don’t get to skip to the part where you say “guys this is just obvious didn’t we already agree on this when we endorsed the Sequences?”. If it were obvious, lots of people wouldn’t be disagreeing with you. You need to go through the steps where we actually get on the same page, and maybe you’re actually just wrong about the thing.
This is what I was trying to say the last time we had an in-person conversation about this (I have no idea if I did a remotely good job at actually saying it).
[1] context: the last thing Duncan and I said in private to each other was “sure seems like we should have an in-person conversation about this because doing it online in public probably won’t end well”, which I still basically believe but since the conversation is essentially going on in public *anyway* it felt important to say this thing. I still prefer talking in person or otherwise refactoring the conversation before going forward
I am extremely angry right now.
Just as people in the second Dragon Army thread spent hundreds and hundreds of words criticizing my three paragraphs of snarky othering of trolls, but could not be bothered to spare a sentence to decry the behavior I was responding to and defending myself from …
… so, too, are you happy to write a four-hundred-word cruxless meander that leaves me no concrete threads to pull on, about how I’m chasing the wrong Polaris or employing the wrong norms or prioritizing things badly, and meanwhile it’s been nine days and Ben’s overtly libelous mis-summarization of me as calling for the creation of ghettos doesn’t deserve a SINGLE. PUBLIC. WORD. in response, from you. It just sits there, happily upvoted into positive territory, tacitly endorsed, continuing to be read by people in its original context, sans moderation.
Where’s the four hundred words on that, Ray Arnold?

“I asked Professor Quirrell why he’d laughed,” the boy said evenly, “after he awarded Hermione those hundred points. And Professor Quirrell said, these aren’t his exact words, but it’s pretty much what he said, that he’d found it tremendously amusing that the great and good Albus Dumbledore had been sitting there doing nothing as this poor innocent girl begged for help, while he had been the one to defend her. And he told me then that by the time good and moral people were done tying themselves up in knots, what they usually did was nothing; or, if they did act, you could hardly tell them apart from the people called bad. Whereas he could help innocent girls any time he felt like it, because he wasn’t a good person. And that I ought to remember that, any time I considered growing up to be good.”
Those things are blocked on having a conversation with Ben, not on the amount of time and attention available. I don’t think this small essay traded off in any meaningful way against writing things on the thread, both in terms of literal calendar time and because I expect they cut into very different motivation/energy buckets for Ray (i.e. my guess is that the above essay was Ray exploring in a relatively low-effort direction, whereas reviving the whole Benquo thread is definitely a high-stress and high-effort option, and also probably a bad idea an hour before Ray is scheduled to talk to Ben).
(More thoughts on the meta-level here, but want to think a bit more about those before posting)
The point is, Maslow’s hierarchy of needs. I read Ray as requesting that I open up to criticism and consider subtle points, and meanwhile it seems like none of you take it seriously that having public, upvoted libel against me standing unobjected-to on LessWrong is an active and ongoing hurt/threat—that your platform is being used to make my life worse.
Like, seriously? Nine days and not even a single word in public response (in the place where the damage is actually occurring)? How hard is it to say “I’m going to talk to Ben about this in private, but for the moment I want to register that this does not match my understanding of Duncan’s beliefs”? That’s not an attack on Ben at all.
Eli recently reminded me of the importance of summarizing the other person’s perspective before responding, so let me start with my current model of where you are coming from. Sadly, my model of Eli only kicked in after I spent an hour writing this comment, so my summary of your current perspective will not be as integrated into the comment overall as I would like to. But here it goes anyways:
You experienced multiple comments by Benquo on the link post of the Punch-Buggy post as clearly violating various rules of good conduct.
You think that while the LessWrong moderators have made some comments highlighting their reservations about that, their responses so far definitely do not constitute a proper response to the violation in a way that upholds the standards you think LessWrong should uphold.
You are somewhat uncertain whether that is because the moderators do not think those were norm violations, because they think they are norm violations but do not need to be urgently responded to, or because they think they have sufficiently responded to the norm violations already.
You are aware that we have a private conversation with Benquo scheduled, but do not think this is sufficient reason to hold off on creating common knowledge on the relevant thread about his comments that you perceived as clearly norm-violating.
You are aware that we responded to some of his comments, but also think that there are multiple open threads that have not been sufficiently responded to, that it is important to respond to all of them, and that just partially responding to them is not enough.
Ray’s comment above seemed bad to you under multiple interpretations of his motivations:
1. If Ray is not commenting on the Benquo thread because he is waiting for the private conversations to resolve, then commenting on this thread and criticizing you is showing a clear asymmetric preference of not extending the same courtesy of cease-fire to you.
2. If Ray thinks responding to this thread is more important than responding to Benquo’s comments, then he is clearly mistaken about the relative magnitude of the norm violations.
3. If Ray is responding to this because it is easy, and not responding to the Benquo thread because it is hard, then that shows a lack of awareness of your current attitude towards this discussion, which you’ve made clear multiple times by saying that you want the Benquo thread to resolve before you think it is time to engage with the details of this conversation.
Let me know if I misrepresented you in any significant way in the summary above. I wrote the below based on that model of yours:
---
After checking the comments again on the relevant thread, it does seem like there is not a comment in that particular place saying that we have a chat with Ben scheduled. It seems correct to me to add that. My epistemic state was that we had written such a comment, and I was surprised to find we did not. After noticing this, I talked to Ray, who had a specific reason for not commenting (which was that he didn’t want to unnecessarily put pressure on the outcome of his private conversation with Ben, given the already tense circumstances), which seemed reasonable, but I think was overall the wrong call.
I think it is good policy to do that in general, and am at least personally planning to do so in the future. I do think there are quite a few complicating factors in play here that make the decision not to comment on all of the comments of Benquo’s that I saw as problematic a pretty reasonable one. We stepped in pretty early in the thread, and said we had various issues with Benquo’s comments. We mentioned to you here multiple times that we would come back to the thread only after we talked to Ben in person. In general it seems like good form not to escalate a thread again after you’ve scheduled a meeting with someone to discuss it. I think the comment you proposed mostly avoids escalating the thread, though I would not be that surprised if it still ended up doing so.
I can definitely assure you that I have a large open loop to respond to and wrap up the Benquo thread, that I am taking the ongoing damage seriously, and that I have spent something like 12 hours over the last week talking to various people about the best way to resolve this. I would have preferred to wrap it up earlier, but it took a while until Ben had the time to schedule a proper one-on-one conversation.
There is a general thing where the higher the stakes of the case are, the longer the investigation and negotiation will take. In this case, the conflict seemed to be quite massive, involving a large number of people, many of whom threatened to abandon LessWrong or take similarly drastic action, based on our decisions and actions. I advocated for taking the time to resolve this properly, and argued that the first step towards doing so would be private conversations with the relevant parties. We scheduled the conversation the day after Benquo wrote his comments, and the conversation was originally set for two or three days afterwards. However, something urgent came up for Benquo on the day of, and so we had to delay the conversation for another five days.
If I understand the situation correctly, Ray talked to you a few days ago and said that he wanted to wait until he was done talking to Benquo before taking further action. I mostly see the thing Ray brought up in this thread as a tangent off of the main topic, one that was not of comparable importance to dealing with the main thread, but that also wasn’t blocked on anything outside of our immediate control. As such, I modeled your epistemic state as knowing that things were on hold, and that the public record would eventually get a correction as soon as we had the necessary private conversations, and did not expect you to perceive Ray’s comment as defecting on that.
Overall, and I do think this is something that your moderation post made me more aware of, I think that we should aim for greater coverage in dealing with norm violations on LessWrong (i.e. your idea of “every comment should get checked off by a moderator”). The primary way I want to deal with this is by trying to make sure the public record is *eventually* set right. I don’t think our currently available moderation resources allow us to respond to everything immediately, or even comparably fast, especially with the additional constraint that Ray and I are still trying to get software development work done on the site, which does not combine well with working on moderation (and with Ben Pace currently being out of commission for university stuff). This balance of responsibilities inevitably means that sometimes it will take a few days for us to have the time to respond, and if you combine that with the difficulties of scheduling in-person meetings, I think nine days is not completely unreasonable.
Just to be clear, we have definitely not dropped the ball on this. I do think we dealt with this situation in a way that wasn’t as publicly transparent as, on reflection, I would have liked, but we did not at any point consider this whole thread dealt with, or stop working on it. In a lower-stakes environment, with less polarization and fewer ways of making everything explode horribly, I think we would have responded to the situation much quicker, and mostly in line with what you would have wanted us to do.
I endorse this.
However, I reiterate: it does not seem like the damage done while an inappropriate comment sits there wholly unaddressed is being taken seriously; the implicit model seems to be “everyone who saw it upvoted and not objected-to will also read and understand the corrected record later, thus reducing all lasting damage to zero,” and that does not seem at all true to me.
I’m only opposed to tagging out in worlds where it seems like literally no one else will hold the line. In worlds where there’s a mod team that’s dedicated to firm norms, I’m enthusiastic about other people benching me if it’s reasonable to assume I’m emotionally compromised.

I am somewhat confused here. Did it not seem to you like you could recruit help, or that an unsuccessful attempt at recruiting help would be informative about what’s right? (That is, I expect it to be much easier for Alice to discuss Bob’s post with Carol, in the hopes that Carol will hold the line for Alice, than for Alice to discuss Bob’s post with Bob, and I think an important skill for Alice is determining when she is emotionally compromised such that this additional step is warranted.)
From my perspective, given prior experience in 1) the original DA post, 2) the DA retrospective, 3) the meta thread that split off of the MTG post, and 4) the long interaction with Ben on Facebook back in maybe June of last year, I believe I am justified in expecting that ~no one else will expend effort on the standards that I think are important. Occasionally you or Qiaochu poke your heads in and offer a measured, moderate endorsement, but it’s usually (again, just according to me) an order of magnitude smaller than the eroding forces.
I note that I have heard from more than three people that they feel constrained in responding for other reasons, which makes me expect that the lack of other people carrying the flag is not entirely explained by me simply being wrong.
To be clear, an example of an “attempt at recruiting help” is sending a comment link to Qiaochu followed by “I am enraged by this comment, how should I respond to it?”. Perhaps Qiaochu says “yeah, X, Y, and Z about this comment are bad”, and then you see whether or not Qiaochu will write the thing; perhaps Qiaochu says “hmm, this comment seems mixed” and the two of you work through it together; perhaps Qiaochu says “I think this makes a solid point, what about it enrages you?” and you figure out the core disagreements in a more friendly environment. If there’s an obstacle of the form “I don’t know how to criticize point X without opening up to attack / jeopardizing a different important thing,” you now have a label of the problem and multiple people to think about how to solve it.
I worry that you’re counting implicit recruitment; that is, you see a post that enrages you, you learn that I also saw that post, you imagine that constitutes an interaction where you ask for help dealing with the post. That’s not the sort of thing I’m imagining, and I would expect implicit recruitment to be insufficient, especially in situations of time pressure.
[edit]To be even clearer, the reason why this response is to the quoted section is because “unsuccessful attempt at recruiting help” is the version where Duncan talks it out with Qiaochu and can’t convince Qiaochu that the comment is bad, not the version where Duncan doesn’t even ask Qiaochu and Qiaochu doesn’t come to his aid.
I’ve definitely made the mistake I think you’re describing, in discussions of Duncan’s stuff.
But LessWrong is a) about figuring out what’s true/false and right/wrong, so this is a valuable domain of practice, and b) is, both in its mission and in the makeup of its membership, less likely to have problems in that domain.

I have come across similar arguments for why discussing politics on LW is worthwhile, and I didn’t find them convincing then. (It is also the case that politics is sort of about figuring out what’s true/false and right/wrong, and definitely the case that LW is less likely to have problems in that domain.) In order to establish that it’s actually worth it, it seems like you need to actually estimate the value and the cost, and it’s not obvious to me that we’re seeing the same costs. For example, one of the non-obvious costs of talking about politics on LW is that you attract people who are relatively more interested in politics than rationality, corroding the culture even if talking about politics actually leveled up the rationality of all of the previous users.
It does seem obvious to me that developing the skill to correctly assess whether a criticism is “wrong” is more valuable than developing the skill to correctly reason about political issues, but it’s not obvious to me that it’s more valuable than the varied costs to the community if this can always be a live point of discussion.
But I think it should absolutely be a target of this community, that it does not matter whose mouth the true words or the valid questions are coming out of. If a thing is true, or a question is pointing at real uncertainty, then anyone should be able to say/ask it.

(For context, Duncan and I have talked about this some in person, but didn’t really finish the conversation.) I still think this doesn’t engage with my point, which is that reading sentences is only indirectly a function from utterance to meaning. In order to determine the meaning of a sentence, I’m implicitly modeling the prior probability of many different meanings, the likelihood of many meaning → utterance mappings, and determining which meanings are most plausible given the utterances I read (or didn’t read). And it’s definitely the case that both the prior distribution and the likelihood distributions depend on whether the speaker is ‘first party’ or ‘second party’ or ‘third party’. On a trivial level, whether someone uses the word “I” or “Vaniver” depends a lot on whether they’re me or not me, but on a less trivial level, while both sentences “I am fair” and “Vaniver is fair” are semantically equivalent (if said by me), what you can infer about the world seems very different depending on whether I’m saying the first one or a third party is saying the second one.
I hear you as pushing for a world where you can write “I am fair” sentences and have them be evaluated identically to how they would be if I had written “Duncan is fair,” and I think that’s undesirable to the limited extent that it is possible.
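(A toy Bayesian sketch of the kind of inference described above; all of the numbers are assumptions chosen purely for illustration, not claims about actual likelihoods. It shows how the same semantic content, “X is fair,” can move the posterior differently depending on whether it comes from the first party or a third party, because the likelihood of the utterance given the underlying state differs between speakers.)

```python
# A toy Bayesian sketch of the inference described above (all numbers are
# assumptions for illustration): the same semantic content supports different
# conclusions depending on whether it comes from a first or third party.

MEANINGS = ["fair", "unfair"]
PRIOR = {"fair": 0.5, "unfair": 0.5}   # assumed prior over the world

# Assumed likelihoods: P(the "is fair" claim is uttered | true state, speaker).
# A first party often says "I am fair" regardless of the truth;
# a third party mostly says "X is fair" when it is actually true.
LIKELIHOOD = {
    ("first_party", "fair"): 0.9,
    ("first_party", "unfair"): 0.6,
    ("third_party", "fair"): 0.7,
    ("third_party", "unfair"): 0.1,
}

def posterior_fair(speaker: str) -> float:
    """P(fair | the 'is fair' claim was uttered by this kind of speaker)."""
    joint = {m: PRIOR[m] * LIKELIHOOD[(speaker, m)] for m in MEANINGS}
    return joint["fair"] / sum(joint.values())

print(f"P(fair | 'I am fair', first party): {posterior_fair('first_party'):.2f}")
print(f"P(fair | 'X is fair', third party): {posterior_fair('third_party'):.2f}")
```

With these assumed numbers the first-party claim only nudges the posterior to 0.60, while the third-party claim pushes it to 0.88, which is the asymmetry the paragraph above is pointing at.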
---
I do think that it should be possible to write “I am fair” sentences, since sometimes they are relevant to a conversation and the best way forward, but it’s not obvious to me that the current cost to writing such sentences is incorrect.
This is _VERY_ difficult to agree on in the abstract, without real categorization and a strong appeal mechanism. Neutrality is harmful in small groups (because it takes a LOT more effort to identify bright lines and deal with the nitpickers), and absolutely necessary in large groups (because the privilege of judgement is so easily abused, and because there’s an explicit inclusiveness goal “this should work for almost everybody”).
One thing I think works well for small-but-intends-to-be-big: jump in with both feet, but make sure you’re collecting (and publishing) metrics about when guidelines are followed vs when judgement was applied, and what the outcome was. A well-defined appeal mechanism can also lend legitimacy, but that’s one step toward the expense of large-group management.