[Edit: I want to note that this represents only a fraction of my overall feelings and views on this whole thing.]
I don’t want to concentrate on the question of which is “worse”; it is hard to even start thinking about that without discussing facts on the ground and general social models that would apply to both cases.
I feel some annoyance at this sentence. I appreciate the stated goal of just trying to understand what happened in the different situations, without blaming or trying to evaluate which is worse.
But then the post repeatedly (in every section!) makes reference to Zoe’s post, comparing her experience at Leverage to your (and others’) experience at MIRI/CFAR, taking specific elements from her account and drawing parallels to your own. This is the main structure of the post!
Some more or less randomly chosen examples (ctrl-f “Leverage” or “Zoe” for lots more):
Zoe begins by listing a number of trauma symptoms she experienced. I have, personally, experienced most of those on the list of cult after-effects in 2017, even before I had a psychotic break.
...
Zoe further talks about how the experience was incredibly confusing and people usually only talk about the past events secretively. This matches my experience.
...
Zoe discusses an unofficial NDA people signed as they left, agreeing not to talk badly of the organization. While I wasn’t pressured to sign an NDA, there were significant security policies discussed at the time (including the one about researchers not asking each other about research).
...
Like Zoe, I experienced myself and others being distanced from old family and friends, who didn’t understand how high-impact the work we were doing was.
If the goal is just to clarify what happened and not at all to blame or compare, then why not...just state what happened at MIRI/CFAR without comparing to the Leverage case, at all?
You (Jessica) say, “I will be noting parts of Zoe’s post and comparing them to my own experience, which I hope helps to illuminate common patterns; it really helps to have an existing different account to prompt my memory of what happened.” But in that case, why not use her post as a starting point for organizing your own thoughts, but then write something about MIRI/CFAR that stands on its own terms?
. . .
To answer my own question...
My guess is that you adopted this essay structure because you want to argue that the things that happened at Leverage were not a one-off random thing, they were structurally (not just superficially) similar to dynamics at MIRI/CFAR. That is, there is a common cause of similar symptoms between those two cases.
If so, my impression is that this essay is going too fast, by introducing a bunch of new interpretation-laden data, and fitting that data into a grand theory of similarity between Leverage and MIRI all at once. Just clarifying the facts about what happened is a different (hard) goal than describing the general dynamics underlying those events. I think we’ll make more progress if we do the first, well, before moving on to the second.
In effect, because the data is presented as part of some larger theory, I have to do extra cognitive work to evaluate the data on its own terms, instead of slipping into the frame of evaluating whether the larger theory is true or false, or whether my affect towards MIRI should be the same as my affect toward Leverage, or something. It made it harder instead of easier for me to step out of the frame of blame and “who was most bad?”.
This feels especially salient because a number of the specific criticisms, in my opinion, don’t hold up to scrutiny, but this is obscured by the comparison to Leverage.
For any cultural characteristic X, there will be healthy and unhealthy versions. For instance, there are clearly good, healthy versions of “having a culture of self-improvement and debugging”, and also versions that are harmful.
For each point, Zoe contends that (at least some parts of) Leverage had a destructive version, and you point out that there was a similar thing at MIRI/CFAR. And for many (but not all) of those points, 1) I agree that there was a similar dynamic at MIRI/CFAR, and also 2) I think that the MIRI/CFAR version was much less harmful than what Zoe describes.
For instance,
Zoe is making the claim that (at least some parts of) Leverage had an unhealthy and destructive culture of debugging. You, Jessica, make the claim that CFAR had a similar culture of debugging, and that this is similarly bad. My current informed impression is that CFAR’s self-improvement culture had some toxic elements, and also is/was an order of magnitude better than what Zoe describes.
Assuming for a moment that my assessment of CFAR is true (of course, it might not be), your comparing debugging at CFAR to debugging at Leverage is confusing to the group cognition, because the two have been implicitly lumped together.
Now, people’s estimation of CFAR’s debugging culture will rise or fall with their estimation of Leverage’s debugging culture. And recognizing this, consciously or unconsciously, people are now incentivized to bias their estimation of one or the other (because they want to defend CFAR, or defend Leverage, or attack CFAR, or attack Leverage).
I’m under this weird pressure, because stating “Anna debugging with me while I worked at CFAR might seem bad, but it was actually mostly innocuous” is kind of awkward, given that it seems to imply that what happened at Leverage was also not so bad.
And on the flip side, I’ll feel more cagey about talking about the toxic elements of CFAR’s debugging culture, because in context, that seems to be implying that it was as bad as Zoe’s account of Leverage.
“Debugging culture” is just one example. For many of these points, I think further investigation might show that the thing that happened at one org was meaningfully different from the thing that happened at the other org, in which case, bucketing them together from the get-go seems counterproductive to me.
Drawing the parallels between MIRI/CFAR and Leverage, point by point, makes it awkward to consider each org’s pathologies on its own terms. It makes it seem like if one was bad, then the other was probably bad too, even though it is at least possible that one org had mostly healthy versions of some cultural elements and the other had mostly unhealthy versions of similar elements, or (even more likely) they each had a different mix of pathologies.
I contend that if the goal is to get clear on the facts, we want to do the opposite thing: we want to, as much as possible, consider the details of the cases independently, attempting to do original seeing, so that we can get a good grasp of what happened in each situation.
And only after we’ve clarified what happened might we want to go back and see if there are common dynamics in play.
Ok. After thinking further and talking about it with others, I’ve changed my mind about the opinion that I expressed in this comment, for two reasons.
1) I think there is some pressure to scapegoat Leverage, by which I mean specifically, “write off Leverage as reprehensible, treat it as ‘an org that we all know is bad’, and move on, while feeling good about ourselves for not being bad the way that they were”.
Pointing out some ways that MIRI or CFAR are similar to Leverage disrupts that process. Anyone who both wants to scapegoat Leverage and also likes MIRI has to contend with some amount of cognitive dissonance. (A person might productively resolve this cognitive dissonance by recognizing what I contend are real disanalogies between the two cases, but they do at least have to grapple with it.)
If you mostly want to scapegoat, this is annoying, but I think we should be making it harder, not easier, to scapegoat in this way.
2) My current personal opinion is that the worst things that happened at MIRI or CFAR are not in the same league as what was described as happening in (at least some parts of) Leverage in Zoe’s post, both in terms of the deliberateness of the bad dynamics and the magnitude of the harm they caused.
I think that talking about MIRI or CFAR is mostly a distraction from understanding what happened at Leverage, and what things anyone here should do next. However, there are some similarities between Leverage on the one hand and CFAR or MIRI on the other, and Jessica had some data about the latter which might be relevant to people’s view about Leverage.
Basically, there’s an epistemic process happening in these comments, and on general principles it is better for people to share info that they think is relevant, so that the epistemic process has the option of disregarding it or not.
I do think that Jessica writing this post will predictably have reputational externalities that I don’t like and I think are unjustified.
Broadly, I think that onlookers not paying much attention would have concluded from Zoe’s post that Leverage is a cult that should be excluded from polite society, and, hearing of both Zoe’s and Jessica’s posts, are likely to conclude that Leverage and MIRI are similarly bad cults.
I think that both of these views are incorrect simplifications. But I think that the second story is less accurate than the first, and so I think it is a cost if Jessica’s post promotes the second view. I have some annoyance about that.
However, I think that we mostly shouldn’t be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.
I still wish that this post had been written differently in a number of ways (such as emphasizing more strongly that, in Jessica’s opinion, management in corporate America is worse than MIRI or Leverage), but I acknowledge that writing such a post is hard.
I’m not sure what writing this comment felt like for you, but from my view it seems like you’ve noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I’m going to highlight a few things.
I do think that Jessica writing this post will predictably have reputational externalities that I don’t like and I think are unjustified.
Broadly, I think that onlookers not paying much attention would have concluded from Zoe’s post that Leverage is a cult that should be excluded from polite society, and, hearing of both Zoe’s and Jessica’s posts, are likely to conclude that Leverage and MIRI are similarly bad cults.
I totally agree with this. I also think that the degree to which an “onlooker not paying much attention” concludes this is the degree to which they are habituated to engaging with discussion of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of “looks”, and Jessica’s post certainly makes CFAR/MIRI “look” bad. This post can be used as “material” or “fuel” for scapegoating, regardless of Jessica’s intent in writing it. Though it can’t be used honestly to scapegoat (if there even is such a thing). Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about “HEY, DON’T USE THIS TO SCAPEGOAT”, and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.
(aside, from both my priors on Jess and my reading of the post it was clear to me that Jess wasn’t trying to scapegoat CFAR/MIRI. It also simply isn’t in Jess’s interests for them to be scapegoated)
Another thought: CFAR/MIRI already “look” crazy to most people who might check them out. UFAI, cryonics, and acausal trade are all things that “look” crazy. And yet we’re all able to talk about them on LW without worrying about “how it looks”, because many, many conversations, sequences, blog posts, comments, etc. have created a community with different common knowledge about what will result in people ganging up on you.
Something that we as a community don’t talk a lot about is power structures, coercion, emotional abuse, manipulation, etc. We don’t collectively build and share models on their mechanics and structure. As such, I think it’s expected that when “things get real” people abandon commitment to the truth in favor of “oh shit, there’s an actual conflict, I or others could be scapegoated, I am not safe, I need to protect my people from being scapegoated at all cost”.
However, I think that we mostly shouldn’t be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.
I totally agree, and I think if you explore this sense you already sorta see how commitment to making sure things “look okay” quickly becomes a commitment to suppress information about what happened.
(aside, these are some of Ben’s posts that have been most useful to me for understanding some of this stuff)
Blame Games
Can Crimes Be Discussed Literally?
Judgement, Punishment, and Information-Suppression Fields
I appreciate this comment, especially that you noticed the giant upfront paragraph that’s relevant to the discussion :)
One note on reputational risk: I think I took reasonable efforts to reduce it, by emailing a draft to people including Anna Salamon beforehand. Anna Salamon added Matt Graves (Vaniver) to the thread, and they both said they’d be happy with me posting after editing (Matt Graves had a couple specific criticisms of the post). I only posted this on LW, not on my blog or Medium. I didn’t promote it on Twitter except to retweet someone who was already tweeting about it. I don’t think such reputation risk reduction on my part was morally obligatory (it would be really problematic to require people complaining about X organization to get approval from someone working at that organization), just possibly helpful anyway.
Spending more than this amount of effort managing reputation risks would seriously risk important information not getting published at all, and too little of that info being published would doom the overall ambitious world-saving project by denying it relevant knowledge about itself. I’m not saying I acted optimally, just, I don’t see the people complaining about this making a better tradeoff in their own actions or advising specific policies that would improve the tradeoff.
Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about “HEY, DON’T USE THIS TO SCAPEGOAT”
I think that’s literally true, but the way you wrote this sentence implies that that is unusual or uncommon.
I think that’s backwards. If a person was intentionally and deliberately motivated to scapegoat some other person or group, it is an effective rhetorical move to say “I’m not trying to punish them, I just want to talk freely about some harms.”
By pretending that you’re not attacking the target, you protect yourself somewhat from counterattack. Now you can cause reputational damage, and if people try to punish you for doing that, you can retreat to the Motte of “but I was just trying to talk about what’s going on. I specifically said not to punish anyone!”
and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.
This also seems too strong to me. I expect that many movement EAs will read Zoe’s post and think “well, that’s enough information for me to never have anything to do with Geoff or Leverage.” This isn’t because they’re not interested in justice; it’s because they don’t have the time or the interest to investigate every allegation, so they’re using some rough heuristics and policies such as “if something looks sufficiently like a dangerous cult, don’t even bother giving it the benefit of the doubt.”
When I was drafting my comment, the original version of the text you first quoted was, “Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about ‘HEY DON’T USE THIS TO SCAPEGOAT’ (which people are totally capable of ignoring)”; I guess I should have left that in there. I don’t think it’s uncommon to ignore such disclaimers; I do think ignoring them actively opposes behaviors and discourse norms I wish to see in the world.
I agree that putting in an “I’m not trying to blame anyone” disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There’s an alternate-timeline version of Jessica who wrote this post as a well-crafted, well-defended rhetorical attack, where the literal statements in the post all clearly say “don’t fucking scapegoat anyone, you fools” but all the associative and impressionistic “dark implications” (Vaniver’s language) say “scapegoat CFAR/MIRI!” I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand on some level that the details don’t matter, and who are listening for “who should we blame?”
To be clear, I think there is such a critical mass! I think this is very unfortunate! (though not awkward, as Scott put it) There was a solid 2+ days where Scott and Vaniver’s insistence on this being a game of “scapegoat Vassar vs. scapegoat CFAR/MIRI” totally sucked me in, and instead of reading the contents of anyone’s comments I was just like “shit, whose side do I join? How bad would it be if people knew I hung out with Vassar once? I mean I really loved my time at CFAR, but I’m also friends with Ben and Jess. Fuck, but I also think Eli is a cool guy! Shit!” That mode of thinking can’t really get me what I want, which is larger and larger groups of people who understand scapegoating dynamics and related phenomena.
This also seems too strong to me. I expect that many movement EAs will read Zoe’s post and think “well, that’s enough information for me to never have anything to do with Geoff or Leverage.” This isn’t because they’re not interested in justice; it’s because they don’t have the time or the interest to investigate every allegation, so they’re using some rough heuristics and policies such as “if something looks sufficiently like a dangerous cult, don’t even bother giving it the benefit of the doubt.”
Okay, I think my statement was vague enough to be mistaken for a statement I think is too strong. Though I expect you might consider my clarification too strong as well :)
I was thinking about the “in any way that matters” part. I can see how that implies a sort of disregard for justice that spans across time. Or more specifically, I can see how you would think it implies that certain conversations you’ve had with EA friends were impossible, or that they were lying/confabulating the whole convo, and you don’t think that’s true. I don’t think that’s the case either. I’m thinking about it as more piecewise behavior. One will sincerely care about justice, but in the moment where they read Jess’s post, ignore the giant disclaimer about scapegoating, and try to scapegoat MIRI/CFAR/Leverage, the cognitive processes generating their actions aren’t aligned with justice, and are working against it. Almost like an “anti-justice traumatic flashback”, though most of the time it’s much more low-key and less intense than what you will read about in the literature on flashbacks. Malcolm Ocean does a great job of describing this sort of “falling into a dream” in his post Dream Mashups (his post is not about scapegoating; it’s about ending up running a cognitive algo that hurts you without noticing).
To be clear, I’m not saying such behavior is contemptible, blameworthy, bad, or to-be-scapegoated. I am saying it’s very damaging, and I want more people to understand how it works. I want to understand how it works more myself. I would love to not get sucked into as many anti-justice dreams where I actively work against creating the sort of world I want to live in.
So when I said “not aligned with justice in any important relevant way”, that was more a statement about “how often and when will people fall into these dreams?” Sorta like the concept of a “fair-weather friend”, my current hunch is that people fall into scapegoating behavior exactly when it would be most helpful for them not to. And reading a post about “here’s some problems I see in this institution that is at the core of our community” is exactly when it is most important for one’s general atemporal commitment to justice to be present in one’s actual thoughts and actions.
I retracted this comment, because reading all of my comments here, a few years later, I feel much more compelled by my original take than by this addition.
I think the addition points out real dynamics, but those dynamics don’t take precedence over the dynamics that I expressed in the first place. Those seem higher priority to me.
This works as a general warning against awareness of hypotheses that are close to but distinct from the prevailing belief. The goal should be to make this feasible, not to become proficient in noticing the warning signs and keeping away from this.
I think the feeling that this kind of argument is fair is a kind of motivated cognition that’s motivated by credence. That is, if a cognitive move (argument, narrative, hypothesis) puts forward something false, there is a temptation to decry it for reasons that would prove too much, that would apply to good cognitive moves just as well if considered in their context, which credence-motivated cognition won’t be doing.