I generally worry about all kinds of potential bad actors associating themselves with EA/rationalists.
There seems to be a general pattern: new people come to an EA/LW/ACX/whatever meetup or seminar, trusting the community, and there they meet someone who abuses this trust and tries to extract free work, recruit them for their org, or abuse them sexually. The new person trusts them as representatives of the EA/rationalist community (which they can easily pretend to be), while the actual representatives of EA/rationalist community probably don’t even notice that this happens, or maybe feel like it’s not their job to go reminding everyone “hey, don’t blindly trust everyone you meet here”.
I assume the illusion of transparency plays a big role here: the existing members generally know who is important and who is a nobody, who plays a role in the movement and who is just hanging out there, what kind of behavior is approved and what kind is not… but the new member has no idea about any of this, and may assume that if someone acts high-status then the person actually is high-status in the movement, and that whatever such a person does has the approval of the community.
To put it bluntly, the EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously—from the perspective of a potential abuser, this is ripe fruit ready to be taken; it is even obvious what sales pitch you should use on them.
Not sure what exactly to do about this, but perhaps the first step could be to write some warnings about this, and read them publicly at the beginning of every public event where new people come. Preferably with specific examples of things that happened in the past; like, not the exact name and place, but the pattern, like “hey, I have a startup that aims to improve the world, wanna code this app for me for free? I will totally donate something to some effective charity, pinky swear”.
I very much agree about the worry. My original comment was to make the easiest case quickly, but I think more extensive cases apply too. For example, I’m sure there have been substantial problems even in the other notable orgs, and in expectation we should expect there to continue to be so. (I’m not saying this based on particular evidence about these orgs, more that the base rate for similar projects seems bad, and these orgs don’t strike me as absolutely above these issues.)
One solution (of a few) that I’m in favor of is to just have more public knowledge about the capabilities and problems of orgs.
I think it’s pretty easy for orgs of about any quality level to seem exciting to new people and recruit them or take advantage of them. Right now, some orgs have poor reputations among those “in the know” (generally for producing poor quality output), but this isn’t made apparent publicly.[1] One solution is to have specialized systems that actually present negative information publicly; this could be public rating or evaluation systems.
This post by Nuno was partially meant as a test for this:
https://forum.effectivealtruism.org/posts/xmmqDdGqNZq5RELer/shallow-evaluations-of-longtermist-organizations
Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have. I think that in the case of Leverage, there really should have been some deep investigation a few years ago, perhaps after a separate setup to flag possible targets of investigation. Back then things were much more disorganized and more poorly funded, but now we’re in a much better position for similar efforts going forward.
[1] I don’t particularly blame them, consider the alternative.
[1] I don’t particularly blame them, consider the alternative.
I think the alternative is actually much better than silence!
For example I think the EA Hotel is great and that many “in the know” think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced.
Simply put, if you are actually trying to make a good org, being silently blackballed by those “in the know” is actually not so fun. Of course there are other considerations, such as backlash, but IDK, I think transparency is good from all sorts of angles. The opinions of those “in the know” matter; they lead, and I think it’s better for everyone if that leadership happens in the light.
Another thing to do, of course, would be to just do some amounts of evaluation and auditing of all these efforts, above and beyond what even those currently “in the know” have.
I think this is more than warranted at this point, yeah. I wonder who might be trusted enough to lead something like that.
I agree that it would have been really nice for grantmakers to communicate more with the EA Hotel, and with other orgs, about their issues. This is often a really challenging conversation to have (“we think your org isn’t that great, for these reasons”), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don’t have much time right now to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication with small grantmakers is a big issue. (And I probably have it much easier than most groups, knowing many of the individuals responsible.)
I think the fact that we have so few grantmakers right now is a big bottleneck that I’m sure basically everyone would love to see improved. (The situation isn’t great for current grantmakers, who often have to work long hours). But “figuring out how to scale grantmaking” is a bit of a separate discussion.
Around making the information public specifically, that’s a whole different matter. Imagine the value proposition: “If you apply to this grant, and get turned down, we’ll write about why we don’t like it publicly for everyone to see.” Fewer people would apply and many would complain a whole lot when it happens. The LTFF already gets flak for writing somewhat-candid information on the groups they do fund.
(Note: I was a guest manager on the LTFF for a few months, earlier this year)
Fewer people would apply and many would complain a whole lot when it happens. The LTFF already gets flak for writing somewhat-candid information on the groups they do fund.
I think that it would be very interesting to have a fund that has that policy. Yes, that might result in fewer people applying, but the fact that people apply anyway might itself be a signal that their project is worth funding.
I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.
That’s good to know.
I imagine grantmakers would be skeptical about people who would say “yes” to an optional form. Like, they say they’re okay with the information being public, but when it actually goes out, some of them will complain about it, costing a lot of extra time.
However, some of our community seems unusually reasonable, so perhaps there’s some way to make it viable.
To put it bluntly, the EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously—from the perspective of a potential abuser, this is ripe fruit ready to be taken; it is even obvious what sales pitch you should use on them.
—-
For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.
What are “intense” and/or “moral” communities? And, why is it (or is it?) a good thing for a community to be “moral” and/or “intense”?
There are certain goals for which having a moral or intense community is helpful. Whether or not I want to live in such a community, I consider it okay for other people to build those communities. On the other hand, building cults is not okay in the same sense.
Intense communities also generally focus on something that otherwise doesn’t get much focus in society; they increase cognitive diversity and are thus able to produce certain kinds of innovations that wouldn’t happen with less cognitive diversity.
I was just thinking of the far right-wing and left-wing in the US; radical news organizations and communities. Q-anon, some of the radical environmentalists, conspiracy groups of all types. Many intense religious communities.
I’m not making a normative claim about the value of being “moral” and/or “intense”, just saying that I’d expect moral/intense groups to have some of the same characteristics and challenges.
while the actual representatives of EA/rationalist community probably don’t even notice that this happens
I think it matters a lot whether this is true, and there is widely known evidence that it isn’t true. For example Brent Dill and (if you are willing to believe victims) Robert Lecnik.
Your post is well said and I am also very worried about EA/rat spaces as a fruitful space for predatory actors.
Which thing are you claiming here? I am a bit confused by the double negative (you’re saying there’s “widely known evidence that it isn’t true that representatives don’t even notice when abuse happens”, I think; might you rephrase?).
I’ve made stupid and harmful errors at various times, and e.g. should’ve been much quicker on the uptake about Brent, and asked more questions when Robert brought me info about his having been “bad at consent” as he put it. I don’t wish to be and don’t think I should be one of the main people trying to safeguard victims’ rights; I don’t think I have the needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds, nor is anyone that I know of, nor do I know if we know how or if there’s much agreement on what kinds of ‘safeguarding’ are even good ideas, so there are whole piles of technical debt and gaps in common knowledge and so on here.)
Nonetheless, I don’t and didn’t view abuse as acceptable, nor did I intend to tolerate serious harms. Parts of Jay’s account of the meeting with me are inaccurate (differ from what I’m really pretty sure I remember, and also from what Robert and his husband said when I asked them for their separate recollections). (From your perspective, I could be lying, in coordination with Robert and his husband who also remember what I remember. But I’ll say my piece anyhow. And I don’t have much of a reputation for lying.)
If you want details on how the me/Robert/Jay interaction went as far as I can remember, they’re discussed in a closed FB group with ~130 members that you might be able to join if you ask the mods; I can also paste them in here I guess, although it’s rather personal/detailed stuff about Robert to have on the full-on public googleable internet so maybe I’ll ask his thoughts/preferences first, or I’m interested in others’ thoughts on how the etiquette of this sort of thing ought to go. Or could PM them or something, but then you skip the “group getting to discuss it” part.
We at CFAR brought Julia Wise into the discussion last time (not after the original me/Robert/Jay conversation, but after Jay’s later allegations plus Somni’s made it apparent that there was something more serious here), because we figured she was trustworthy and had a decent track record at spotting this kind of thing.
I’m claiming that CFAR representatives did in fact notice bad things happening, and that the continuation of bad things happening was not for lack of noticing. I think that you are pretty familiar with this view.
I don’t wish to be and don’t think I should be one of the main people trying to safeguard victims’ rights; I don’t think I have the needed eyes/skill for it. (Separately, I am not putting in the time and effort required to safeguard a community of many hundreds,
I want to point out what is in my mind a clear difference between taking on a major role as a safeguard, and failing people who trust you when the accused confesses to you. You can dispute whether that happened, but it’s not as though I am asking you to be held liable for all harms.
I can also paste them in here I guess, although it’s rather personal/detailed stuff about Robert to have on the full-on public googleable internet so maybe I’ll ask his thoughts/preferences first
If you think this guy raped people (with 80% credence or whatever) then you should probably warn people about him (in a public googleable way). If you don’t think so then you can just say so. Basically, it seems like your willingness to publish this stuff should mostly just depend on how harmful you think this person was.
I’m personally not aware of anything you did with respect to Robert that demonstrates intolerance for serious harms. Allowing somebody to continue to be an organizer for something after they confess to rape qualifies as tolerance of serious harms to me.
Of course my comment here seems litigious—I am not really trying to litigate.
In very plain terms: It has been alleged that CFAR leadership knew that Brent and Robert were committing serious harms and at the very least tolerated it. I take these allegations seriously. Anyone who takes these allegations seriously would obviously be troubled by it being taken for granted that community leaders do not even notice harms taking place.
“It has been alleged” strikes me as not meeting the bar that LW should strive to clear, when dealing with such high stakes, with this much uncertainty.
Allegations come with an alleger attached. If that alleger is someone else (i.e. if you don’t want to tie your own credibility to their account) then it’s good to just … link straight to the source.
If that alleger is you (including if you’re repeating someone else’s allegations because you found them credible enough that you’re adopting them, and repeating them on your own authority), you should be able to state them directly and concretely.
“It has been alleged” is a vague, passive-voice, miasmatic phrase that is really difficult to work with, or think clearly around.
It also implies that these allegations have not been, or cannot be, settled, as questions of fact, or at least probability. It perpetuates a sort of un-pin-downable quality, because as long as the allegations are mist and fog, just floating around absent anyone who’s taking ownership of them, they can’t be conclusively settled or affirmed, and can be repeated forever.
I think it’s pretty bad to lean into a dynamic like that.
In very plain terms: it is the explicit and publicly stated position of CFAR leadership that they were unaware of Brent’s abuses, and that as soon as they became aware of them, they took quick and final action.
In that very statement, you can also find CFAR’s mea culpas re: places where CFAR feels it should have become aware, prior to the moment it did become aware. CFAR does not claim that it did a good job with Brent. CFAR explicitly acknowledges pretty serious failures.
No one is asking anyone to take for granted that community leaders either [always see], or [never wrongly ignore], harms. That was a strawman. Obviously it is a valid hypothesis that community leaders can fail to see harms, or fail in their response to them. You can tell it’s a valid hypothesis because CFAR is an existence proof of community leaders outright admitting to just such a mistake.
It seems to me that Anna is trying pretty hard, in her above reply, to be open, and legible, and give-as-much-as-she-can without doing harm, herself. I read in Anna’s reply something analogous to the CFAR Brent statement: that, with hindsight, she wishes she had done some things differently, and paid more attention to some concerning signals, but that she did not suppress information, or ignore or downplay evidence of harm once it came clearly to her attention (I say “evidence of harm” rather than “harm” because it’s important to be clear about my epistemic status with regards to this question, which is that I have no idea).
I furthermore see in Anna’s comment evidence that there are non-CFAR-leadership people looking at the situation, and taking action, albeit in a venue that you and I cannot see. It doesn’t sound like anything is being ignored or suppressed.
So insofar as “things that have been alleged” are concerned, I think it boils down to something like:
Either one believes CFAR (in the Brent case) or Anna (above), or one explicitly registers (whether publicly or privately) a claim that they’re lying, or somehow blind or incompetent to a degree tantamount to lying.
Which is a valid hypothesis to hold, to be clear. Right now the whole point of the broader discussion is “are these groups and individuals good or bad, and in what ways?” It’s certainly reasonable to think “I do not believe them.”
But that’s different from “it has been alleged,” and the implication that no response has been given. To the allegation that CFAR leadership ignored Brent, there’s a clear, on-the-record answer from CFAR. To the allegation that CFAR leadership ignored Robert or other similar situations, there’s a clear, on-the-record answer from Anna above (that, yes, is not fully forthright, but that’s because there are other groups already involved in trying to answer these questions and Anna is trying not to violate those conversations nor the involved parties’ privacy).
I think that you might very well have further legitimate beef, à la your statement that “I’m claiming that CFAR representatives did in fact notice bad things happening, and that the continuation of bad things happening was not for lack of noticing.”
But I think we’re at a point where it’s important to be very clear, and to own one’s accusations clearly (or, if one is not willing to own them clearly, because e.g. one is pursuing them privately, to not leave powerful insinuations in places where they’re very difficult to responsibly answer).
The answer, in both cases given above, seems to me to be, unambiguously:
“No, we did not knowingly tolerate harm.”
If you believe CFAR and/or Anna are lying, then please proceed with that claim, whether publicly or privately.
If you believe CFAR and/or Anna are confused or incompetent, then please proceed with that claim, whether publicly or privately.
But please … actually proceed? Like, start assembling facts, and presenting them here, or presenting them to some trusted third-party arbiter, or whatever. In particular, please do not imply that no answer to the allegations has been given (passive voice). I don’t think that repeating sourceless substanceless claims—
(especially in the Brent case, where all of the facts are in common knowledge and none of them are in dispute at this point)
—after Anna’s already fairly in-depth and doing-its-best-to-cooperate reply, is doing good for anybody in either branch of possibility. It feels like election conspiracy theorists just repeating their allegations for the sake of the power the repetition provides, and never actually getting around to making a legible case.
EDIT: For the record, I was a CFAR employee from 2015 to 2018, and left (for entirely unrelated reasons) right around the same time that the Brent stuff was being resolved. The linked document was in part written with my input, and sufficiently speaks for me on the topic.
If you think this guy raped people (with 80% credence or whatever) then you should probably warn people about him (in a public googleable way).
In most legal environments, like the US, publicly accusing someone of being a rapist comes with huge legal risks, especially if the relevant evidence only allows 80% credence.
Calling for something like this seems to ignore the complexity of the relevant dynamics.
Allowing somebody to continue to be an organizer for something after they confess to rape
To fill in some details (I asked Robert, he’s fine with it):
Robert had not confessed to rape, at least not the way I would use the word. He had told me of an incident where (as he told it to me) [edit: the following text is rot13′d, because it contains explicit descriptions of sexual acts] ur naq Wnl unq obgu chg ba pbaqbzf, Wnl unq chg ure zbhgu ba Eboreg’f cravf, naq yngre Eboreg unq chg uvf zbhgu ba Wnl’f cravf jvgubhg nfxvat, naq pbagvahrq sbe nobhg unys n zvahgr orsber abgvpvat fbzrguvat jnf jebat. Wnl sryg genhzngvmrq ol guvf. Eboreg vzzrqvngryl erterggrq vg, naq ernyvmrq ur fubhyq unir nfxrq svefg, naq fubhyq unir abgvprq rneyvre fvtaf bs qvfpbzsbeg.
Robert asked for my help getting better at consent, and I recommended he do a bunch of sessions on consent with a life coach named Matt Porcelli, which he did (he tells me they did not much help); I also had a bunch of conversations with him about consent across several months, but suspect these did at most a small part of what was needed. I did allow him to continue using CFAR’s community space to run (non-CFAR-affiliated) LW events after he told me of this incident. In hindsight I would do a bunch of things differently around these events, particularly asking Jay more questions about how it went, and asking Robert more questions too probably, particularly since in hindsight there were a number of other signs that Robert didn’t have the right skills and character here (e.g., he found it difficult to believe he could refuse hugs; and he’d told me about a previous more minor incident involving Robert giving someone else “permission” to touch Jay’s hair.) My guess in hindsight is that the incident had more warning signs about it than I noticed at the time. But I don’t think “he confessed to rape” is a good description.
(Separately, Somni and Jay later published complaints about Robert that included more than what’s above, after which CFAR asked Robert not to be in CFAR’s community space. Robert and I remained and remain friends.)
(Robert has since worked with an AltJ group that he says actually helped a lot, if it matters, and has shown me writeups and things that leave me thinking he’s taken things pretty seriously and has been slowly acquiring the skills/character he initially lacked. I am inclined to think he has made serious progress, via serious work. But I am definitely not qualified to judge this on behalf of a community; if CFAR ever readmits Robert to community events it will be on someone else’s judgment who seems better at this sort of judgment, not sure who.)
I think it matters a lot whether this is true, and there is widely known evidence that it isn’t true.
If that’s so, then it’s very bad, and I feel like some people should receive a wake-up slap. I live on the opposite side of the planet, and I usually only learn about things after they have already exploded. Sometimes I wonder if anything would be different if I lived where most of the action happens. Generally, it seems like they should import some adults into the Bay Area.
As far as I know, in the Vienna community we do not tolerate this type of behavior. (Anyone feel free to correct me if I am wrong, publicly or privately at your choice.)