Please allow me to point out one difference between the Rationalist community and Leverage that is so obvious and huge that many people possibly have missed it.
The Rationalist community has a website called LessWrong, where people critical of the community can publicly voice their complaints and discuss them. For example, you can write an article accusing their key organizations of being abusive, and it will get upvoted and displayed on the front page, so that everyone can add their part of the story. The worst thing the high-status members of the community will do to you is publicly post their disagreement in a comment. In turn, you can disagree with them; and you will probably get upvoted, too.
Leverage Research makes you sign an NDA, preventing you from talking about your experience there. Most Leverage ex-members are in fact afraid to discuss their experience. Leverage even tries (unsuccessfully) to suppress the discussion of Leverage on LessWrong.
Considering this, do you find it credible that the dynamics of both groups are actually very similar? Because that seems to be the narrative of the post we are discussing here—the very post that got upvoted and is displayed publicly to insiders and outsiders alike. I strongly object to making this kind of false equivalence.
it’s pretty important to be willing to engage the hidden / silent / unpopular minority
The hidden / silent / unpopular minority members can post their criticism of MIRI/CFAR right here, and most likely it will get upvoted. No legal threats whatsoever. No debugging sessions with their supervisor. Yes, some people will probably disagree with them, and those will get upvoted, too.
You know, this reminds me of a comparison between dictatorships and democracies. In a dictatorship, the leader officially has 100% popular support. In a democracy, maybe 50% of people say that the country sucks and the leadership is corrupt. Should we take these numbers at face value? Should we discount them both to the same degree and say “if the dictatorship claims to have 100% popular support, but in fact only 20% of people are happy with the situation, then if the democracy claims to have 50% popular support, we should apply the same ratio and conclude that only 10% of people are happy”?
Because it seems to me that you are making a similar claim here. We know that some people are afraid to talk publicly about their experience in Leverage. You seem to assume that there must be a similar group of people afraid to talk publicly about their experience in MIRI/CFAR. I think this is unlikely. I assume that if someone is unhappy about MIRI/CFAR doing something, there is probably a blog post about it somewhere (not necessarily on LessWrong) already.
On the other side of it, why do people seem TOO DETERMINED to turn [Michael Vassar] into a scapegoat?
Do you disagree with specific actions being attributed to Michael? Do you disagree with the conclusion that it is a good reason to avoid him and also tell all your friends to avoid him?
Considering this, do you find it credible that the dynamics of both groups are actually very similar?
I’m a little unsure where this is coming from. I never explicitly made this comparison.
That said, I was at a CFAR staff reunion recently where one of the talks was on ‘narrative control’ and we were certainly interested in the question about institutions and how they seem to employ mechanisms for (subtly or not) keeping people from looking at certain things or promoting particular thoughts or ideas. (I am not the biggest fan of the framing, because it feels like it has the ‘poison’—a thing I’ve described in other comments.)
I’d like to be able to learn about these and other such mechanisms, and this is an inquiry I’m personally interested in.
I strongly object to making this kind of false equivalence.
I mostly trust that you, I, and most readers can discern the differences that you’re worried about conflating. But if you genuinely believe that a false equivalence might rise to prominence in our collective sense-making, I’m open to the possibility. If you check your expectations, do you expect that people will get confused about the gap between the Leverage situation and the CFAR/MIRI thing? Most of the comments so far seem unconfused on this afaict.
You seem to assume that there must be a similar group of people afraid to talk publicly about their experience in MIRI/CFAR.
Sorry, I think I wasn’t being clear. I am not assuming this.
My claim is that comments similar to the one Davis is making don’t serve as a strong general counter-argument in situations where there might be a hidden minority.
I am not (right now) claiming CFAR/MIRI has such a hidden minority. Just that the kind of evidence Davis was trying to provide doesn’t strike me as very STRONG evidence, given the nature of the dynamics of this type of thing.
The dynamics of this kind of thing can create polarized experiences, where a minority of people have a really BAD time, while most people do not notice or have it rise to the right level of conscious awareness. I am trying to add weight to Zoe’s section in her post on “variations in responses.” Even though Leverage was divided into subgroups and the workshops were chill and all that, I don’t think the subgroup divisions are the only force behind why there’s so much variation in responses.
I think even without subgroups, this ‘class division’ thing might have turned up in Leverage. Because it’s actually not very hard to create a hidden minority, even in plain sight.
And y’know what, even though CFAR is certainly not as bad as Leverage, and I’m not trying to bucket the two together… I will put forth that a silent minority has existed at CFAR, in the past, and that their experience was difficult and pretty traumatic for them. And I have strong reasons to believe they’re still ‘not over it’. They’re my friends, and I care about them. I do not think CFAR is to blame or anything (again, I’m uninterested in the blame game).
I hope it is fine for me to try to investigate the nature of these group dynamics. I don’t really buy that my comments are contributing to a wild conflation between Leverage and CFAR. If anything, I think investigating on this level will contribute to greater understanding of the underlying patterns at play.
The conflation between Leverage and CFAR is made by the article. Most explicitly here...
Most of what was considered bad about the events at Leverage Research also happened around MIRI/CFAR, around the same time period (2017-2019).
...and generally, the article goes like “Zoe said that X happens in Leverage. A kinda similar thing happens in MIRI/CFAR, too.” The entire article (except for the intro) is structured as a point-by-point comparison with Zoe’s article.
Most commenters don’t buy it. But I imagine (perhaps incorrectly) that if a person unfamiliar with MIRI/CFAR and the rationalist community in general read the article, their impression would be that the two are pretty similar. This is why I consider it quite important to explain, very clearly, that they are not. This debate is public… and I expect it to be quote-mined (by RationalWiki and consequently Wikipedia).
I hope it is fine for me to try to investigate the nature of these group dynamics.
Sure, go ahead!
I will put forth that a silent minority has existed at CFAR, in the past, and that their experience was difficult and pretty traumatic for them. And I have strong reasons to believe they’re still ‘not over it’.
I would be happy to hear about their experience. Generally, the upvotes here are pretty much guaranteed. Specific accusations can be addressed—either by “actually, you got this part wrong” or by “oops, that was indeed a mistake, and here is what we are going to do to prevent this from happening again”.
(And sometimes by plain refusal, like “no, if you believe that you are possessed by demons and need to exorcise them, the rationalist community will not play along; but we can recommend a good therapist”. Similarly, if you like religion, superstition, magic, or drugs, please keep them at home, do not bring them to community activities, especially not in a way that might look like the community endorses this.)
Dear silent minority, if you are reading this, what can we do to allow you to speak about your experience? If you need anonymity, you can create a throwaway account. If you need a debate where LessWrong moderators cannot interfere, one of you can create an independent forum and advertise it here. If you are afraid of some, dunno, legal action or whatever, could you please post a proposal of a public commitment that MIRI/CFAR should take to allow you to speak freely?
(I might regret giving this advice but heck, just contact David Gerard from RationalWiki, he will be more than happy to hear and publish any dirt you have on MIRI/CFAR or anyone in the rationalist community.)
Any other proposals, what specifically could MIRI/CFAR do, or stop doing, to allow the silent minority to talk about their difficult and traumatic experience with the rationalist community and its organizations?
But I imagine (perhaps incorrectly) that if a person unfamiliar with MIRI/CFAR and the rationalist community in general read the article, their impression would be that the two are pretty similar.
I seem less concerned about this than you do. I don’t see the consequences of this being particularly bad, in expectation. It seems you believe it is important, and I hear that.
I would be happy to hear about their experience.
I’m frustrated by the way you are engaging in this… there’s a strangely blithe tone, and I am reading it as somewhat mean?
If you want to engage in a curious, non-judgy, and open conversation about the way this conversation is playing out, I could be up for that (in a different medium, maybe email or text or a phone call or something). Continuing on the object level like this is not working for me. You can DM me if you want… but obviously fine to ignore this also. If I know you IRL, it is a little more important to me, but if I don’t know you, then I’m fine with whatever happens. Well wishes.
This comment mostly makes good points in their own right, but I feel it’s highly misleading to imply that those points are at all relevant to what Unreal’s comment discussed. A policy doesn’t need to be crucial to be good. A working doesn’t need to be worse than terrible to get attention to its remaining flaws. Inaccuracy of a bug report should provoke a search for its better form, not nullify its salience.