Attempt to get shared models on “Variations in Responses”:
Quote from another comment by Mr. Davis Kingsley:
My sense is that dynamics like those you describe were mostly not present at CFAR, or insofar as they were present weren’t really the main thing.
I bid:
This counts as counter-evidence, but it’s unfortunately not very strong counter-evidence. Or at least it’s weaker than one might naively believe.
Why?
It is true of many groups that even while most of a group’s activities, or even the main point of the group, might be wholesome, above board, and beneficial, it is possible that this is still secretly enabling the abuse of a silent or hidden minority: the minority that, in the end, is going to be easiest to dismiss, ridicule, or downplay.
It might even be only ONE person who takes all the abuse.
I think this dynamic is so fucked that most people don’t want to admit that it’s a real thing. How can a community or group that is mostly wholesome and good and happy be hiding atrocious skeletons in their closet? (Not that this is true of CFAR or MIRI, I’m not making that claim. I do get a ‘vibe’ from Zoe’s post that it’s what Leverage 1.0 might have been like. /grimace)
An aside: I am somewhat angry with people’s responses to jessicata in this comment section… (esp Viliam and somewhat Eli Tyre) I guess because I am smelling a dynamic where jessicata might be drawing some “blame-y” energy towards her (unintentionally), and people are playing into it. But uhhh I notice that my own “drama triangle rescuer” is playing into my reaction soo. D: Not sure what to do.
Anyway, to properly investigate matters like this, it’s pretty important to be willing to engage the hidden / silent / unpopular minority. Ideally, without taking on all their baggage (since they prob have some baggage).
If we’re not ready to engage that minority (while skillfully discerning what’s delusion and what isn’t), then we shouldn’t force it. imo.
Michael Vassar, it seems, is … I notice that we collectively are having a hard time thinking about him clearly. At different points he is being painted as a cartoon villain, but there’s also a weird undertone from his defenders of … really wanting to downplay his involvement? That smells funky. Like, sure, maybe he wasn’t very directly involved and stuff, but… do you NOT consider him to be a thought leader for your group? Why are they referred to as Vassarites? Are you SURE you’re not doing something slippery inside your minds? It kind of feels like something is mentally walled off in your own heads… :l :l
On the other side of it, why do people seem TOO DETERMINED to turn him into a scapegoat? Most of you don’t sound like you really know him at all. And while it’s nice to have a bunch of one-time impressions of a guy, this is not a great foundation for judging his character either. And other people seem a little too eager to use unfounded rumors as evidence.
I will admit that I don’t particularly trust psychiatric institutions or mainstream narratives about psychology, and so I have some bias against Scott Alexander’s take (with no ill will towards the man himself).
I also mentioned in a different comment that I suspect there’s some ‘poison’ inside Vassarite narratives about narrative control, society, institutions, etc. But I feel hopeful about the potential for a different framing that doesn’t have the poison in it.
…
I am advocating for a lot more discernment and self-awareness in this discussion. And the things Anna mentioned in another comment, like caring, compassion, and curiosity.
Please allow me to point out one difference between the Rationalist community and Leverage that is so obvious and huge that many people possibly have missed it.
The Rationalist community has a website called LessWrong, where people critical of the community can publicly voice their complaints and discuss them. For example, you can write an article accusing their key organizations of being abusive, and it will get upvoted and displayed on the front page, so that everyone can add their part of the story. The worst thing the high-status members of the community will do to you is publicly post their disagreement in a comment. In turn, you can disagree with them; and you will probably get upvoted, too.
Leverage Research makes you sign an NDA, preventing you from talking about your experience there. Most Leverage ex-members are in fact afraid to discuss their experience. Leverage even tries (unsuccessfully) to suppress the discussion of Leverage on LessWrong.
Considering this, do you find it credible that the dynamics of both groups are actually very similar? Because that seems to be the narrative of the post we are discussing here: the very post that got upvoted and is displayed publicly to insiders and outsiders alike. I strongly object to making this kind of false equivalence.
it’s pretty important to be willing to engage the hidden / silent / unpopular minority
The hidden / silent / unpopular minority members can post their criticism of MIRI/CFAR right here, and most likely it will get upvoted. No legal threats whatsoever. No debugging sessions with their supervisor. Yes, some people will probably disagree with them, and those will get upvoted, too.
You know, this reminds me of the comparison between dictatorships and democracies. In a dictatorship, the leader officially has 100% popular support. In a democracy, maybe 50% of people say that the country sucks and the leadership is corrupt. Should we take these numbers at face value? Should we discount them both to the same degree and say: “if the dictatorship claims 100% popular support but in fact only 20% of people are happy with the situation, then when the democracy claims 50% popular support, we should apply the same ratio and conclude that only 10% of people are happy”?
Because it seems to me that you are making a similar claim here. We know that some people are afraid to talk publicly about their experience in Leverage. You seem to assume that there must be a similar group of people afraid to talk publicly about their experience in MIRI/CFAR. I think this is unlikely. I assume that if someone is unhappy about MIRI/CFAR doing something, there is probably a blog post about it somewhere (not necessarily on LessWrong) already.
On the other side of it, why do people seem TOO DETERMINED to turn [Michael Vassar] into a scapegoat?
Do you disagree with the specific actions being attributed to Michael? Do you disagree with the conclusion that those actions are a good reason to avoid him, and to tell all your friends to avoid him too?
Considering this, do you find it credible that the dynamics of both groups are actually very similar?
I’m a little unsure where this is coming from. I never explicitly made this comparison.
That said, I was at a CFAR staff reunion recently where one of the talks was on ‘narrative control’, and we were certainly interested in the question of how institutions seem to employ mechanisms for (subtly or not) keeping people from looking at certain things, or for promoting particular thoughts and ideas. (I am not the biggest fan of the framing, because it feels like it has the ‘poison’, a thing I’ve described in other comments.)
I’d like to be able to learn about these and other such mechanisms, and this is an inquiry I’m personally interested in.
I strongly object to making this kind of false equivalence.
I mostly trust that you, I, and most readers can discern the differences that you’re worried about conflating. But if you genuinely believe that a false equivalence might rise to prominence in our collective sense-making, I’m open to the possibility. If you check your expectations, do you expect that people will get confused about the gap between the Leverage situation and the CFAR/MIRI thing? Most of the comments so far seem unconfused on this, afaict.
You seem to assume that there must be a similar group of people afraid to talk publicly about their experience in MIRI/CFAR.
Sorry, I think I wasn’t being clear. I am not assuming this.
My claim is that comments like the one Davis made don’t serve as a strong general counter-argument in situations where there might be a hidden minority.
I am not (right now) claiming CFAR/MIRI has such a hidden minority. Just that the kind of evidence Davis was trying to provide doesn’t strike me as very STRONG evidence, given the nature of the dynamics of this type of thing.
The dynamics of this kind of thing can create polarized experiences, where a minority of people have a really BAD time while most people either don’t notice or never have it rise to the right level of conscious awareness. I am trying to add weight to Zoe’s section in her post on “variations in responses.” Even though Leverage was divided into subgroups and the workshops were chill and all that, I don’t think the subgroup divisions are the only force behind why there’s a lot of variation in responses.
I think even without subgroups, this ‘class division’ thing might have turned up in Leverage. Because it’s actually not very hard to create a hidden minority, even in plain sight.
And y’know what, even though CFAR is certainly not as bad as Leverage, and I’m not trying to bucket the two together… I will put forth that a silent minority has existed at CFAR, in the past, and that their experience was difficult and pretty traumatic for them. And I have strong reasons to believe they’re still ‘not over it’. They’re my friends, and I care about them. I do not think CFAR is to blame or anything (again, I’m uninterested in the blame game).
I hope it is fine for me to try to investigate the nature of these group dynamics. I don’t really buy that my comments are contributing to a wild conflation between Leverage and CFAR. If anything, I think investigating on this level will contribute to greater understanding of the underlying patterns at play.
The conflation between Leverage and CFAR is made by the article. Most explicitly here...
Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).
...and generally, the article goes like “Zoe said that X happens in Leverage. A kinda similar thing happens in MIRI/CFAR, too.” The entire article (except for the intro) is structured as a point-by-point comparison with Zoe’s article.
Most commenters don’t buy it. But I imagine (perhaps incorrectly) that if a person unfamiliar with MIRI/CFAR and the rationalist community in general read the article, their impression would be that the two are pretty similar. This is why I consider it quite important to explain, very clearly, that they are not. This debate is public… and I expect it to be quote-mined (by RationalWiki and consequently Wikipedia).
I hope it is fine for me to try to investigate the nature of these group dynamics.
Sure, go ahead!
I will put forth that a silent minority has existed at CFAR, in the past, and that their experience was difficult and pretty traumatic for them. And I have strong reasons to believe they’re still ‘not over it’.
I would be happy to hear about their experience. Generally, the upvotes here are pretty much guaranteed. Specific accusations can be addressed—either by “actually, you got this part wrong” or by “oops, that was indeed a mistake, and here is what we are going to do to prevent this from happening again”.
(And sometimes by plain refusal, like “no, if you believe that you are possessed by demons and need to exorcise them, the rationalist community will not play along; but we can recommend a good therapist”. Similarly, if you like religion, superstition, magic, or drugs, please keep them at home, do not bring them to community activities, especially not in a way that might look like the community endorses this.)
Dear silent minority, if you are reading this, what can we do to allow you to speak about your experience? If you need anonymity, you can create a throwaway account. If you need a debate where LessWrong moderators cannot interfere, one of you can create an independent forum and advertise it here. If you are afraid of some, dunno, legal action or whatever, could you please post a proposal of a public commitment that MIRI/CFAR should take to allow you to speak freely?
(I might regret giving this advice but heck, just contact David Gerard from RationalWiki, he will be more than happy to hear and publish any dirt you have on MIRI/CFAR or anyone in the rationalist community.)
Any other proposals for what, specifically, MIRI/CFAR could do, or stop doing, to allow the silent minority to talk about their difficult and traumatic experience with the rationalist community and its organizations?
But I imagine (perhaps incorrectly) that if a person unfamiliar with MIRI/CFAR and the rationalist community in general read the article, their impression would be that the two are pretty similar.
I seem less concerned about this than you do. I don’t see the consequences of this being particularly bad, in expectation. It seems you believe it is important, and I hear that.
I would be happy to hear about their experience.
I’m frustrated by the way you are engaging in this… there’s a strangely blithe tone, and I am reading it as somewhat mean?
If you want to engage in a curious, non-judgy, and open conversation about the way this conversation is playing out, I could be up for that (in a different medium, maybe email or text or a phone call or something). Continuing on the object level like this is not working for me. You can DM me if you want… but obviously fine to ignore this also. If I know you IRL, it is a little more important to me, but if I don’t know you, then I’m fine with whatever happens. Well wishes.
This comment mostly makes good points in its own right, but I feel it’s highly misleading to imply that those points are at all relevant to what Unreal’s comment discussed. A policy doesn’t need to be crucial to be good. A working system doesn’t need to be worse than terrible to get attention to its remaining flaws. Inaccuracy in a bug report should provoke a search for its better form, not nullify its salience.
On the other side of it, why do people seem TOO DETERMINED to turn him into a scapegoat? Most of you don’t sound like you really know him at all.
A blogger I read sometimes talks about his experience with lung cancer (decades ago), where people would ask his wife “so, he smoked, right?” and his wife would say “nope” and then they would look unsettled. He attributed it to something like “people want to feel like all health issues are deserved, and so their being good / in control will protect them.” A world where people sometimes get lung cancer without having pressed the “give me lung cancer” button is scarier than the world where the only way to get it is by pressing the button.
I think there’s something here where people are projecting all of the potential harm onto Michael, in a way that’s sort of fair from a ‘driving their actions’ perspective (if they’re worried about the effects of talking to him, maybe they shouldn’t talk to him), but which really isn’t owning the degree to which the effects they’re worried about are caused by their instability or the them-Michael dynamic.
[A thing Anna and I discussed recently is, roughly, the tension between “telling the truth” and “not destabilizing the current regime”; I think it’s easy to see there being a core disagreement about whether or not it’s better to see the ways in which the organizations surrounding you are ___, and Michael is being thought of as a sort of pole for the “tell the truth, even if everything falls apart” principle.]
+1 to your example and esp “isn’t owning the degree to which the effects they’re worried about are caused by their instability or the them-Michael dynamic.”
I also want to leave open the hypothesis that this thing isn’t a one-sided dynamic, and Michael and/or his group is unintentionally contributing to it. Whereas the lung cancer example seems almost entirely one-sided.
Sorry if my tone about “something slippery” was way too confronting. I have simultaneously a lot of compassion and a lot of faith in people’s ability to ‘handle difficult truths’ or something like that. But that nuanced tone is hard to get across on the internet.
If you feel negatively impacted by my comment here, you are welcome to challenge me or confront me about it here or elsewhere.