I imagine a lot of people want to say a lot of things about Leverage and the dynamics around it, except it’s difficult or costly/risky or hard-to-imagine-being-heard-about or similar.
If anyone is up for saying a bit about how that is for you personally (about what has you reluctant to try to share stuff to do with Leverage, or with EA/Leverage dynamics or whatever, that in some other sense you wish you could share — whether you had much contact with Leverage or not), I think that would be great and would help open up space.

I’d say err on the side of including the obvious.
I interacted with Leverage some over the years. I felt like they had useful theory and techniques, and was disappointed that it was difficult to get access to their knowledge. I enjoyed their parties. I did a Paradigm workshop. I knew people in Leverage to a casual degree.
What’s live for me now is that when the other recent post about Leverage was published, I was subjected to strong, repeated pressure by someone close to Geoff to have the post marked as flawed, and asked to lean on BayAreaHuman to approximately retract the post or acknowledge its flaws. (This request was made of me in my new capacity as head of LessWrong.) “I will make a fuss” is what I was told. I agreed that the post has flaws (I commented to that effect in the thread) and this made me feel the pressure wasn’t illegitimate despite being unpleasant. Now it seems to be part of a larger concerning pattern.
Further details seem pertinent, but I find myself reluctant to share them (and already apprehensive that this more muted description will have the feared effect) because I just don’t want to damage the relationship I have with the person who was pressuring me. I’m unhappy about it, but I still value that relationship. Heck, I haven’t named them. I should note that this person updated (or began reconsidering their position) after Zoe’s post and has since stopped applying any pressure on me/LessWrong.
With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in any way taking the side against Leverage. I predict that if I do so, I’ll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don’t know, I don’t fear any particularly terrible retribution myself, but I’m loath to make “enemies”.
I’d like to think that I’ve got lots of integrity and will say true things despite pressures and incentives otherwise, but I’m definitely not immune to them.
With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in any way taking the side against Leverage. I predict that if I do so, I’ll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don’t know, I don’t fear any particularly terrible retribution myself, but I’m loath to make “enemies”.
If you do make enemies in this process, in trying to help us make sense of the situation: count me among the people you can call on to help.
Brainstorming more concrete ideas: if someone makes a GoFundMe to try to offset any financial pressure/punishment Leverage-adjacent people might experience from sharing their stories, I’ll be very happy to contribute.
I’m unhappy about it, but I still value that relationship
Positive reinforcement for finding something you could say that (1) protects this sort of value at least somewhat and (2) opens the way for aggregation of the metadata, so to speak; like without your comment, and other hypothetical comments that haven’t happened yet for similar reasons, the pattern could go unnoticed.
I wonder if there’s an extractable social norm / conceptual structure here. Something like separating [the pattern which your friend was participating in] from [your friend as a whole, the person you have a relationship with]. Those things aren’t separate exactly, but it feels like it should make sense to think of them separately, e.g. to want to be adversarial towards one but not the other. Like, if there’s a pattern of subtly suppressing certain information or thoughts, that’s adversarial, and we can be agnostic about the structure/location of the agency behind that pattern while still wanting to respond appropriately in the adversarial frame.
My contact with Leverage over the years was fairly insignificant, which is part of why I don’t feel like it’s right for me to participate in this discussion. But there are some things that have come to mind, and since Anna’s made space for that, I’ll note them now. I still think it’s not really my place to say anything, but here’s my piece anyway. I’m speaking only for myself and my own experience.
I interviewed for an ops position at Leverage/Paradigm in early 2017, when I was still in college. The process took maybe a couple months, and the in-person interview happened the same week as my CFAR workshop; together these were my first contact with the Bay community. Some of the other rationalists I met that week warned me against Leverage in vague terms; I discussed their allegations with the ops team at my interview and came away feeling satisfied that both sides had a point.
I had a positive experience at the interview and with the ops team and their hiring process in general. The ops lead seemed to really believe in me and recommended me to other EA orgs after I didn’t get hired at Paradigm, and that was great. My (short-term) college boyfriend had a good relationship with Leverage and later worked at Paradigm. In mid-2017 I met a Leverage employee in a non-Leverage context and we went on a couple dates; that ended amicably. All that’s just to say that at that point, I thought I had a fairly positive relationship with them.
Then, Leverage/Paradigm put on EA Summit in the summer of 2018. I applied to attend and was rejected. My boyfriend, who I think attended a Paradigm workshop around that time, managed to get that decision reversed, but told me that I was rejected because I was on a list of people who might speak ill of Leverage. That really rubbed me the wrong way. I didn’t think I had ever acted in a way to deserve that, and it seemed bad to me that they were so paranoid about their reputation that they would reject large swaths of people from a conference that’s supposed to bring together EAs from around the world, just because of vague suspicion. Ironically that’s the personal experience that led me to distrust Leverage the most.
The bottom line being that discussions around Leverage’s reputation have always been really fraught and murky, and it’s totally understandable to me that people would fear unknown repercussions for discussing Leverage in public. Many other people in these threads have said that in various ways, but there’s my concrete example.
The obsession with reputation control is super concerning to me, and I wonder how this connects up with Leverage’s poor reputation over the years.
Like, I could imagine two simplified stories...
Story 1:
Leverage’s early discoveries and methods were very promising, but the inferential gap was high—they really needed a back-and-forth with someone to properly communicate, because everyone had such different objections and epistemic starting points. (This is exactly the trouble MIRI had in its early comms—if you try to anticipate which objections will be salient to the reader, you’ll usually miss the mark. And if you try to address lots of possible objections at once, you end up long-winded and still miss the mark.)
Because of this inferential gap, Leverage acquired a very bad reputation with a bunch of people who (a) misunderstood its reasoning, and then (b) didn’t get why Leverage wasn’t investing more into public comms.
Leverage then responded by sharing less and trying to reset its public reputation to ‘normal’. It wasn’t trying to become super high-status, just trying to undo the damage already done / prevent things from further degrading as rumors mutated over time. Unfortunately, its approach was heavy-handed and incompetent, and backfired.
Story 2:
Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers.
This was one of the causes of Leverage’s bad reputation from an early date, through some combination of ‘people noticing when Leverage bungles a PR operation’ and ‘humans are pretty good at detecting other humans’ character, and picking up on subtle cues that someone is manipulative’.
To what extent is one or the other true? (Another possibility is that there isn’t much of a causal tie between Leverage’s PR obsession and its bad reputation, and they just both occurred for other reasons.)
Based on broad-strokes summaries said to me by ex-Leveragers (though admittedly not first-hand experience), I would say that the statement “Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers” rings true to what I have heard.
Some things mentioned to me by Leverage people as typical/archetypal of Geoff’s attitude include being willing to lie to people outside Leverage, feeling attacked or at risk of being attacked, and viewing adjacent non-Leverage groups within the broader EA sphere as enemies.
Thanks! To check: did one or more of the ex-Leveragers say Geoff said he was willing to lie? Do you have any detail you can add there? The lying one surprises me more than the others, and is something I’d want to know.
Here is an example:

Zoe’s report says of the information-sharing agreement: “I am the only person from Leverage who did not sign this, according to Geoff who asked me at least three times to do so, mentioning each time that everyone else had (which read to me like an attempt to pressure me into signing).”
I have spoken to another Leverage member who was asked to sign, and did not.
The email from Matt Fallshaw says the document “was only signed by just over half of you”. Note the recipients list includes people (such as Kerry Vaughan) who were probably never asked to sign because they were not present, but I would believe that such people are in the minority; so this isn’t strict confirmation, but just increased likelihood, that Geoff was lying to Zoe.
This is lying to someone within the project. I would subjectively anticipate higher willingness to lie to people outside the project, but I don’t have anything tangible I can point to about that.
I am more confident that what I heard was “Geoff exhibits willingness to lie”. I also wouldn’t be surprised if what I heard was “Geoff reports being willing to lie”. I didn’t tag the information very carefully.
My current feelings are a mixture of the following:
I disagree with a lot of the details of what many people have said (both people who had bad experiences and people defending their Leverage experiences and giving positive testimonials), and feel like expressing my take has some chance of making those people feel like their experiences are invalidated, or at least of sparking some kind of conflict
I know that Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand, and that makes me both feel like I can trust many fewer things in the discussion, and makes me personally more hesitant to share some things (while also feeling like that’s kind of cowardly, but I haven’t yet had the time to really work through my feelings here, which in itself has some chilling effects that I feel uncomfortable with, etc.)
On the other side, there have been a lot of really vicious and aggressive attacks on anyone saying anything pro-Leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It’s also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.
I feel like it’s going to be really hard to say anything without people pigeonholing me into belonging to some group that is trying to rewrite the rationality social and political landscape some way, and that makes me feel like I have to actively think about how to phrase what I am saying in a way that avoids that pigeonholing effect. (As a concrete example, one person who approached me had read Ben’s initial comment on the “BayAreaHuman” post—“I confirm that this is a real person in good standing”—as an endorsement of the post, when the comment was really just intended to confirm some facts about the identity of the poster, basically independently of the content of the post.)
I myself have access to some sensitive and somewhat confidential information, and am struggling with navigating exactly which parts are OK to share and which ones are not.
Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand
I assume there isn’t a public record of this anywhere? Could I hear more details about what was said? This sounds atrocious to me.
I similarly feel that I can’t trust the exculpatory or positive evidence about Leverage much so long as I know there’s pressure to withhold negative information. (Including informal NDAs and such.)
On the other side, there have been a lot of really vicious and aggressive attacks on anyone saying anything pro-Leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It’s also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.
I agree with this too, and think it’s similarly terrible, but harder to blame any individual for (and harder to fix).
I assume it’s to a large extent an extreme example of the ‘large inferential gaps + true beliefs that sound weird’ problem afflicting a lot of EA orgs, including MIRI. Though if Leverage has been screwed up for a long time, some of that public reaction may also have been fed over the years by true rumors spreading about the org.
Let’s stand up for the truth regardless of threats from Geoff/Leverage, and let’s stand up for the truth regardless of the mob.
I feel like it’s going to be really hard to say anything without people pigeonholing me into belonging to some group that is trying to rewrite the rationality social and political landscape some way.
Let’s stand up for the truth! Maintaining some aura of neutrality or impartiality at the expense of the truth would be IMO quite obviously bad.
I myself have access to some sensitive and somewhat confidential information, and am struggling with navigating exactly which parts are OK to share and which ones are not.
I think that it is seen as not very normative on LW to say “I know things, confidential things I will not share, and because of that I have a very [bad/good] impression of this person or group”. But IMO it’s important to surface. Vouching is an important social process.
It seems that your account was registered just to participate in this discussion, and you withhold your personal identity.

If you sincerely believe that information should be shared, why are you withholding your own while telling other people to take risks?

I have no private information to share. I think there is an obvious difference between asking powerful people in the community to stand up for the truth, and asking some rando commentator to de-anonymize.
Anna is attempting to make people comfortable having this difficult conversation about Leverage by first inviting them just to share what factors are affecting their participation. Oliver is kindly obliging and saying what’s going through his mind.
This seems like a good approach to me for getting the conversation going. Once people have shared what’s going through their minds–and probably these need to be received with limited judgmentality–the group can then understand the dynamics at play and figure out how to proceed with a productive discussion.
All that to say, I think it’s better to hold off on pressuring people or saying their reactions aren’t normative [1] in this sub-thread. Generally, I think having this whole conversation well requires a gentleness and patience in the face of the severe, hard-to-talk-about situation. Or to be direct, I think your comments in this thread have been brusque/pushy in a way that’s hurting the conversation (others feel free to chime in if that seems wrong to them).
[1] For what it’s worth, I think disclosing that your stance is informed by private info is good and proper.
I think your comments in this thread have been brusque/pushy in a way that’s hurting the conversation (others feel free to chime in if that seems wrong to them).
I mentioned in a different comment that I’ve appreciated some of farp’s comments here for pushing back against what I see as a missing mood in this conversation (acknowledgment that the events described in Zoe’s account are horrifying, as well as reassurance that people in leadership positions are taking the allegations seriously and might take some actions in response). I also appreciate Ruby’s statement that we shouldn’t pressure or judge people who might have something relevant to say.
The unitofcaring post on mediators and advocates seems relevant here. I interpret farp (edit: not necessarily in the parent comment, but in various other comments in this thread) as saying that they’d like to see more advocacy in this thread instead of just mediation. I am not someone who has any personal experiences to share about Leverage, but if I imagine how I’d personally feel if I did, I think I agree.
On mediators and advocates: I think order-of-operations MATTERS.

You can start seeking truth, and pivot to advocate, as UOC says.

What people often can’t do easily is start as advocate, and pivot to truth.

And with something like this? What you advocated early can do a lot to color both what and who you listen to, and who you hear from.
The entire thesis of the post is that you want a mixture of advocacy and mediation in the community. So if your proposal is that we all mediate, and then pivot to advocacy, I think that is not at all what UOC says.
Not that I super endorse the prescription / dichotomy that the post makes to begin with.
I liked Farp’s “Let’s stand up for the truth” comment, and thought it felt appropriate. (I think for different reasons than “mediators and advocates”—I just like people bluntly stating what they think, saying the ‘obvious’, and cheerleading for values that genuinely deserve cheering for. I guess I didn’t expect Ollie to feel pressured-in-a-bad-way by the comment, even if he disagrees with the implied advice.)
Thanks. Your comments and mayleaf’s do mean a lot to me. Also, I was surprised by negative reaction to that comment and didn’t really expect it to come off as admonishment or pressure. Love 2 cheerlead \o/
I have thought about this UOC post and it has grown on me.
The fact is that I believe Zoe and I believe her experience is not some sort of anomaly. But I am happy to advocate for her just on principle.
Geoff has far more resources and much at stake. Zoe just has (IMO) the truth and bravery, and little to gain but peace. Justice for Geoff just doesn’t need my assistance, but justice for Zoe might.
So I am happy to blindly ally with Zoe and any other victims. And yes I would like others to do the same, and broadcast that we will fight for them. Otherwise they are entering a potentially shitty looking fight with little to gain against somebody with everything to lose.
I don’t demand that no mediation take place, but if I want to plant my flag, that’s my business. It’s not like I am doing anything dishonest in the course of my advocacy.
And to be completely frank, as an advocate for the victims, I don’t really want Anna Salamon to be one of the major mediators here. I don’t think she’s got a good track record with CFAR stuff at all—I have mentioned Robert Lecnik a few times already.
I think Kelsey’s post is right—mediators need to seem impartial. For me, Anna can’t serve this role. I couldn’t say how representative I am.
I will be happy to contribute financially to Zoe’s legal defense, if Geoff decides to take revenge.

In the meanwhile, I am curious about what actually happened. The more people talk, the better.

I appreciate this invitation. I’ll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe

Beyond what I laid out there:

It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on “already known” properties that I feel substantially raise the prior on the actually-observed type of harm, and to disclose in the post that my motivation in cherry-picking those statements was to support pattern-matching to a specific template of harm.)
After posting, it was emotionally a bit of a drag to receive comments that complained that the information-sharing attempt was not done well enough, and comparatively few comments grateful for attempting to share what I could, as best I could, to the best of my ability at the time, although the upvote patterns felt encouraging. I was pretty much aware that that was what was going to happen. In general, “flinching in anticipation of a high criticism-to-gratitude ratio” is an overall feeling I have when I imagine posting anything on LessWrong.
I was told by friends before posting that I ought to consider the risk to myself and to my contacts of tangible real-world retribution. I don’t have any experience with credible risk of real-world retribution. It feels mind-numbing.
Meta: I haven’t felt fully comfortable describing retribution concerns, including in the post, because I haven’t been able to rule out that revealing the tactical landscape of why I’m sharing or avoiding certain details is simply more information that can be used by Geoff and associates to make life harder for people pursuing clarity. This is easier now that Zoe has written firsthand about specific retribution concerns.
Meta-meta: It doesn’t feel great to talk about all this paranoid adversarial retribution thinking, because I don’t want to contribute to the spread of paranoia and adversarial thinking. It feels contagious. Zoe describes a very paranoid atmosphere within Leverage and among those who left, and I feel that attesting to a strategically-aware disclosure pattern carries that toxic vibe into new contexts.
Since it sounds like just-upvotes might not be as strong a signal of endorsement as positive engagement...
I want to say that I really appreciate and respect that you were willing to come forward, with facts that were broadly-known in your social graph, but had been systematically excluded from most people’s models.
And you were willing to do this, in a pretty adversarial environment! You had to deal with a small invisible intellectual cold war that ensued, almost alone, without backing down. This counts for even more.
I do have a little bit of sensitive insider information, and on the basis of that: Both your posts and Zoe’s have looked very good-faith to me.
In a lot of places, they accord with or expand on what I know. There are a few parts I was not close enough to confirm, but they have broadly looked right to me.
I also have a deep appreciation, for Zoe calling out that different corners of Leverage had very different experiences with it. Because they did! Not all time-slices or sub-groups within it experienced the same problems.
This is probably part of why it was so easy to systematically play people’s personal experiences against each other: since Geoff or others knew the context through which each person had experienced Leverage, they could systematically bias whose reports were heard.
(Although I think it will be harder in the future to engage in this kind of bullshit, now that a lot of people are aware of the pattern.)
To those who had one of the better firsthand experiences of Leverage:
I am still interested in hearing your bit! But if you are only engaging with this due to an inducement that probably includes a sampling-bias, I appreciate you including that detail.
(And I am glad to see people in this broader thread, being generally open about that detail.)
Meta-meta: It doesn’t feel great to talk about all this paranoid adversarial retribution thinking, because I don’t want to contribute to the spread of paranoia and adversarial thinking. It feels contagious. Zoe describes a very paranoid atmosphere within Leverage and among those who left, and I feel that attesting to a strategically-aware disclosure pattern carries that toxic vibe into new contexts.
I don’t have anything to add, but I just want to say I felt a pronounced pang of warmth/empathy towards you reading this part. Not sure why, something about fear/bravery/aloneless/fog-of-war.
I will talk about my own bit with Leverage later, but I don’t feel like it’s the right time to share it yet.
(But fwiw: I do have some scars, here. I have a little bit of skin in this one. But most of what I’m going to talk about, comes from analogizing this with a different incident.)
A lot of the position I naturally slide into around this, which I have… kind of just embraced, is of trying to relate hard to the people who:
WERE THERE
May have received a lot of good along with the bad
May have developed a very complicated and narratively-unsatisfying opinion because of that, which feels hard to defend
Are very sensitized to condemning mob-speak. Because they’ve been told, again and again, that anything good they got out of the above, will be swept out with the bathwater if the bad comes to light.
This sort of thing only stays covered up for this long, if there was a lot of pressure and plausible-sounding arguments pointing in the direction of “say nothing.” The particular forms of that, will vary.
Core Leverage seems pretty willing to resort to manipulation and threats? And despite me generally trying so hard to avoid this vibe: I want to condemn that outright.
Also, in any other circumstance: Most people are very happy to condemn people who break strong secrecy agreements that they’ve made. If you feel like you’ve made one, I recognize that this is not easy to defy.
(My own part in this story is small. The only reason I’m semi-comfortable with sharing it, is because I got all of my own “vaguely owning the fact that I broke a very substantial secrecy agreement, publicly, to all my friends” out of the way EARLY. It would be bogging me down like crazy, otherwise. I respect Zoe, and others, for defying comparable pulls, or even worse ones.)
If you’re stuck on this bit, I would like to say: This is an exceptional circumstance. You should maybe talk to somebody, eventually. Maybe only once your own processing has settled down. Publicly might not be the right call for you, and I won’t push for it. Please take care for yourself, and try to be careful to pick someone who is not especially prone to demonizing things.
People can feel their truth drowned out by mobs of uninvested people, condemning it from afar.
The people who know what happened here, are in the minority. They have the most knowledge of what actually happened, and the most skin in this. They are also the people with the most to fear, and the most to lose.
People often don’t appreciate, how much the sheer numbers game can weigh on you. It can come to feel like the chorus is looming over you, in this sort of circumstance; poised, always ready to condemn you and yours from afar. Each individual member is only “speaking-their-truth” once, but in aggregate, they can feel like an army.
It’s hard to keep appropriate sight of the fact that the people who were there, and their story, are probably worth 1000x as much as even the most coherent but distant and un-invested condemning statement. They will not get as many shares. It might not even qualify as a story! But their contributions are worth a lot more, at least in my mind. Because they were THERE.
And I… want to stick up for them where relevant? Because this one wasn’t my incident, but I know how hard it might be for them to do it for themselves. I can’t swear I will do a good job of it? But the desire is there.
I do think a more-private forum, that is enriched for people who were closer to the event, might be a more comfortable place for some people to recount. It’s part of why I tried to talk up that possibility, in another thread.
...it is unfortunately not my place to make this, though. For various reasons, which feel quite solid, to me.
(And after Ryan’s account? I honestly have some concerns about it getting infiltrated by one of the more manipulative people around Leverage. I don’t want to discount that fear! I still think it might be a good idea?)
I do think we could stand to have a clearer route for things to be shared anonymously, because I suspect at least some people would be more comfortable that way.
(Since “attempts at deanonymization” appears to be a known issue, it may be worth having a flag for “only share as numeric aggregations of >1, using my recounting as a data-point.”)
EDIT EDIT: This press release names Anna Salamon, Eli Tyre, Matthew Graves, and Matt Fallshaw as several somewhat-intermediary people who can be contacted. I feel fewer misgivings around contacting them than I did around the proposal of contacting Geoff and Larissa to handle this internally.
I was once in a similar position, due to my proximity to a past (different) thing. I kinda ended up excruciatingly sensitive, to how some things might read or feel to someone who was close, got a lot of good out of it (with or without the bad), and mostly felt like there was no way their account wouldn’t be twisted into something unrecognizable. And who may be struggling, with processing an abrupt shift in their own personal narrative—although I sincerely hope the 2 years of processing helped to make this less of a thing? But if you are going through it anyway, I am sorry.
And… I want this to go right. It didn’t go right then; not entirely. I think I got yelled at by someone I respect, the first time I opened up about it. I’m not quite sure how to make this less scary for them? But I want it to be.
The people I know who got swept up in this includes some exceptionally nice people. There is at least one of them, who I would ordinarily call exceptionally sane. Please don’t feel like you’re obligated to identify as a bad person, or as a victim, because you were swept up in this. Just because some people might say it about you, doesn’t make it who you are.
While I realize I’ve kinda de-facto “taken a side” by this point (and probably limited who will talk to me as a result)? I was mispronouncing Geoff’s name, before this hit; this is pretty indicative of how little I knew him personally. I started out mostly caring about having the consequences-for-him be reached based off of some kind of reasonable assessment, and not caring too much about having it turn out one way or another. I still feel more invested in there being a good process, and in what will generate the best outcomes for the people who worked under him (or will ever work under him), than anything else.
Compared to Brent’s end-result of “homeless with health-problems in Hawaii” **? The things I’ve asked for have felt mild. But I also knew that if I wasn’t handling mentioning them, somebody else probably would. In my eyes, we probably needed someone outside of the Leverage ecosystem who knew a lot of the story (despite the substantial information-hiding efforts) to be handling this part of the response.
Pushing for people to publish the information-hiding agreement, and proposing that Geoff maybe shouldn’t have a position with a substantial amount of power over others (at least while we sort this out), felt to me like fairly weaksauce requests. I am still a bit surprised that Geoff may have taken this as a convincing audition for a “prosecutor” role? I am angry and clued-in enough to sincerely fill the role, if somebody has to and if nobody else will touch it. But it still surprised me, because it is not what I see as my primary responsibility here.
**Despite all his flaws and vices? I was close to Brent. I do care about Brent, and I wouldn’t have wished that for him.
An abstract note: putting stock in anonymous accounts potentially opens wider a niche for false accounts, because anonymity prevents doing induction about trustworthiness across accounts by one person. (I think anonymity is a great tool to have, and don’t know if this is practically a problem; I just want to track the possibility of this dynamic, and appreciate the additional value of a non-anonymous account.)
One tool here is for a non-anonymous person to vouch for the anonymous person (because they know the person, and/or can independently verify the account).
True. A maybe not-immediately-obvious possibility: someone playing Aella’s role of posting anonymous accounts could offer the following option: if you give an account and take this option, and the poster later finds out that you seriously lied, then they have the option to de-anonymize you. The point being, in the hypothetical where the account is egregiously false, the accounter’s reputation still takes a hit; and so these accounts can be trusted more. If there’s no possibility of de-anonymization, then the account can only be trusted insofar as you trust the poster’s ability to track accounters’ trustworthiness, which seems like a more complicated and difficult task. (This might be a terrible thing to do, IDK.)
(Downvoted. I’d have strong downvoted but −5 seems too harsh. Sounds like you’re responding to something other than what I said, and if that’s right, I don’t like that you said “VERY creepy” about the proposal, rather than about whatever you took from it.)
I was very up-front about the role I am attempting to embody in this: Relating to, and trying to serve, people with complicated opinions who are finding it hard to talk about this.
I feel we needed someone to take this role. I wish someone had done it for me, when my stuff happened.
You seem to not understand that I am making this statement, from that place and in that capacity.
Try seeing it through the lens of that, rather than thinking that I’m making confident statements about your epistemic creepiness.
That depends on the algorithm for determining whether “you seriously lied”.
Imagine a hypothetical situation where telling the truth puts you in danger, but you read this offer, think “well, I am telling the truth, so they will protect my anonymity”, and truthfully describe your version of events. Unluckily for you, your opponent lied, and was more convincing than you. Afterwards, because your story contradicts the accepted version of events, it seems that you were lying, unfairly accusing people who are deemed innocent. As punishment for “seriously lying”, your identity is exposed.
If people with sensitive information suspect that something like this could happen, then it defeats the purpose of the proposal.
Yeah, that seems like a big potential flaw. (Which could just mean, no one should stick their neck out like that.) I’m imagining that there’s only potential benefit here in cases where the accounter also has strong trust in the poster, such that they think the poster almost certainly won’t be falsely convinced that a truth is an egregious lie.
In particular, the agreement isn’t about whether the court of public opinion decides it was a lie, just the poster’s own opinion. (The poster can’t be held accountable to that by the public, unless the public changes its mind again, but the poster can at least be held accountable by the accounter.) (We could also worry that this option would only be taken by accounters with accounts that are infeasible to ever reveal as egregious lies, which would be a further selection bias, though this is sort of going down a hypothetical rabbit hole.)
In the past, I’ve found it difficult and costly to talk about Leverage and the dynamics around it, or about organizations that are or have been affiliated with effective altruism, though on the occasions I have spoken up, I’ve said more than others. I would have done it more, but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up again, sometimes with what amounted to nothing more than peer pressure.
That was a few years ago. For lots of reasons, it’s now easier, less costly, and less risky for me, and easier not to feel fear. I don’t know yet what I’ll say regarding any or all of this related to Leverage, because I don’t have any sense of how I might be prompted or provoked to respond. Yet I expect I’ll have more to say, though I don’t yet have particular feelings about what I might share as relevant. I’m sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.
My general feeling about this is that the information I know is either well-known or otherwise “not my story to tell.”
I’ve had very few direct interactions with Leverage except applying to Pareto, a party or two, and some interactions with Leverage employees (not Geoff) and volunteers. As is common with human interactions, I appreciated many but not all of my interactions.
Like many people in the extended community, I’ve been exposed to a non-overlapping subset of accounts/secondhand rumors of varying degrees of veracity. For some things it’s been long enough that I can’t track which confidences I’m supposed to keep, and under which conditions, so it seems better to err on the side of silence.
At any rate, it’s ultimately not my story/tragedy. My own interactions with Leverage have not been personally noticeably harmful or beneficial.
I imagine a lot of people want to say a lot of things about Leverage and the dynamics around it, except it’s difficult or costly/risky or hard-to-imagine-being-heard-about or similar.
If anyone is up for saying a bit about how that is for you personally (about what has you reluctant to try to share stuff to do with Leverage, or with EA/Leverage dynamics or whatever, that in some other sense you wish you could share — whether you had much contact with Leverage or not), I think that would be great and would help open up space.
I’d say err on the side of including the obvious.
I interacted with Leverage some over the years. I felt like they had useful theory and techniques, and was disappointed that it was difficult to get access to their knowledge. I enjoyed their parties. I did a Paradigm workshop. I knew people in Leverage to a casual degree.
What’s live for me now is that when the other recent post about Leverage was published, I was subjected to strong, repeated pressure by someone close to Geoff to have the post marked as flawed, and asked to lean on BayAreaHuman to approximately retract the post or acknowledge its flaws. (This request was made of me in my new capacity as head of LessWrong.) “I will make a fuss” is what I was told. I agreed that the post has flaws (I commented to that effect in the thread) and this made me feel the pressure wasn’t illegitimate despite being unpleasant. Now it seems to be part of a larger concerning pattern.
Further details seem pertinent, but I find myself reluctant to share them (and already apprehensive that this more muted description will have the feared effect) because I just don’t want to damage the relationship I have with the person who was pressuring me. I’m unhappy about it, but I still value that relationship. Heck, I haven’t named them. I should note that this person updated (or began reconsidering their position) after Zoe’s post and has since stopped applying any pressure on me/LessWrong.
With Geoff himself (with whom I personally have had a casual positive relationship) I feel more actual fear of being critical or in anyway taking the side against Leverage. I predict that if I do so, I’ll be placed on the list of adversaries. And something like, just based on the reaction to the Common knowledge post, Leverage is very agenty when it comes to their reputation. Or I don’t know, I don’t fear any particularly terrible retribution myself, but I loathe to make “enemies”.
I’d like to think that I’ve got lots of integrity and will say true things despite pressures and incentives otherwise, but I’m definitely not immune to them.
If you do make enemies in this process, in trying to help us make sense of the situation: count me among the people you can call on to help.
Brainstorming more concrete ideas: if someone makes a GoFundMe to try to offset any financial pressure/punishment Leverage-adjacent people might experience from sharing their stories, I’ll be very happy to contribute.
Positive reinforcement for finding something you could say that (1) protects this sort of value at least somewhat and (2) opens the way for aggregation of the metadata, so to speak; like without your comment, and other hypothetical comments that haven’t happened yet for similar reasons, the pattern could go unnoticed.
I wonder if there’s an extractable social norm / conceptual structure here. Something like separating [the pattern which your friend was participating in] from [your friend as a whole, the person you have a relationship]. Those things aren’t separate exactly, but it feels like it should make sense to think of them separately, e.g. to want to be adversarial towards one but not the other. Like, if there’s a pattern of subtly suppressing certain information or thoughts, that’s adversarial, and we can be agnostic about the structure/location of the agency behind that pattern while still wanting to respond appropriately in the adversarial frame.
My contact with Leverage over the years was fairly insignificant, which is part of why I don’t feel like it’s right for me to participate in this discussion. But there are some things that have come to mind, and since Anna’s made space for that, I’ll note them now. I still think it’s not really my place to say anything, but here’s my piece anyway. I’m speaking only for myself and my own experience.
I interviewed for an ops position at Leverage/Paradigm in early 2017, when I was still in college. The process took maybe a couple months, and the in-person interview happened the same week as my CFAR workshop; together these were my first contact with the Bay community. Some of the other rationalists I met that week warned me against Leverage in vague terms; I discussed their allegations with the ops team at my interview and came away feeling satisfied that both sides had a point.
I had a positive experience at the interview and with the ops and their team hiring process in general. The ops lead seemed to really believe in me and recommended me to other EA orgs after I didn’t get hired at Paradigm, and that was great. My (short-term) college boyfriend had a good relationship with Leverage and later worked at Paradigm. In mid-2017 I met a Leverage employee in a non-Leverage context and we went on a couple dates; that ended amicably. All that’s just to say that at that point, I thought I had a fairly positive relationship with them.
Then, Leverage/Paradigm put on EA Summit in the summer of 2018. I applied to attend and was rejected. My boyfriend, who I think attended a Paradigm workshop around that time, managed to get that decision reversed, but told me that I was rejected because I was on a list of people who might speak ill of Leverage. That really rubbed me the wrong way. I didn’t think I had ever acted in a way to deserve that, and it seemed bad to me that they were so paranoid about their reputation that they would reject large swaths of people from a conference that’s supposed to bring together EAs from around the world, just because of vague suspicion. Ironically that’s the personal experience that led me to distrust Leverage the most.
The bottom line being that discussions around Leverage’s reputation have always been really fraught and murky, and it’s totally understandable to me that people would fear unknown repercussions for discussing Leverage in public. Many other people in these threads have said that in various ways, but there’s my concrete example.
The obsession with reputation control is super concerning to me, and I wonder how this connects up with Leverage’s poor reputation over the years.
Like, I could imagine two simplified stories...
Story 1:
Leverage’s early discoveries and methods were very promising, but the inferential gap was high—they really needed a back-and-forth with someone to properly communicate, because everyone had such different objections and epistemic starting points. (This is exactly the trouble MIRI had in its early comms—if you try to anticipate which objections will be salient to the reader, you’ll usually miss the mark. And if you do this a lot, you miss the mark and are long-winded.)
Because of this inferential gap, Leverage acquired a very bad reputation with a bunch of people who (a) misunderstood its reasoning, and then (b) didn’t get why Leverage wasn’t investing more into public comms.
Leverage then responded by sharing less and trying to reset its public reputation to ‘normal’. It wasn’t trying to become super high-status, just trying to undo the damage already done / prevent things from further degrading as rumors mutated over time. Unfortunately, its approach was heavy-handed and incompetent, and backfired.
Story 2:
Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers.
This was one of the causes of Leverage’s bad reputation, from an early date. Through some combination of ‘people noticing when Leverage bungles a PR operation’ and ‘humans are pretty good at detecting other humans’ character, and picking up on subtle cues that someone is manipulative’.
To what extent is one or the other true? (Another possibility is that there isn’t much of a causal tie between Leverage’s PR obsession and its bad reputation, and they just both occurred for other reasons.)
Based on broad-strokes summaries said to me by ex-Leveragers (though admittedly not first-hand experience), I would say that the statement “Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers” rings true to what I have heard.
Some things mentioned to me by Leverage people as typical/archetypal of Geoff’s attitude include being willing to lie to people outside Leverage, feeling attacked or at risk of being attacked, and viewing adjacent non-Leverage groups within the broader EA sphere as enemies.
Thanks! To check: did one or more of the ex-Leveragers say Geoff said he was willing to lie? Do you have any detail you can add there? The lying one surprises me more than the others, and is something I’d want to know.
Here is an example:
Zoe’s report says of the information-sharing agreement “I am the only person from Leverage who did not sign this, according to Geoff who asked me at least three times to do so, mentioning each time that everyone else had (which read to me like an attempt to pressure me into signing).”
I have spoken to another Leverage member who was asked to sign, and did not.
The email from Matt Fallshaw says the document “was only signed by just over half of you”. Note the recipients list includes people (such as Kerry Vaughan) who were probably never asked to sign because they were not present, but I would believe that such people are in the minority; so this isn’t strict confirmation, but just increased likelihood, that Geoff was lying to Zoe.
This is lying to someone within the project. I would subjectively anticipate higher willingness to lie to people outside the project, but I don’t have anything tangible I can point to about that.
I am more confident that what I heard was “Geoff exhibits willingness to lie”. I also wouldn’t be surprised if what I heard was “Geoff reports being willing to lie”. I didn’t tag the information very carefully.
My current feelings are a mixture of the following:
I disagree with a lot of the details of what many people have said (both people who had bad experiences and people defending their Leverage experiences and giving positive testimonials), and feel like expressing my take has some chance of making those people feel like their experiences are invalidated, or at least spark some conflict of some type
I know that Geoff and Leverage more broadly in the past have said pretty straightforwardly that they will take pretty adversarial action if someone threatens their reputation or brand, and that makes me both feel like I can trust many fewer things in the discussion, and makes me personally more hesitant to share some things (while also feeling like that’s kind of cowardly, but I haven’t yet had the time to really work through my feelings here, which in itself has some chilling effects that I feel uncomfortable with, etc.)
On the other side, there have been a lot of really vicious and aggressive attacks to anyone saying anything pro-leverage for many years, with a strength that I think is overall even greater and harder to predict than what Geoff and Leverage have been doing. It’s also been more of a crowd-driven phenomenon, which makes it less predictable and more scary.
I feel like it’s going to be really hard to say anything without people pigeonholing me into belonging to some group that is trying to rewrite the rationality social and political landscape some way, and that makes me feel like I have to actively think about how to phrase what I am saying in a way that avoids that pigeonholing effect (as a concrete example, one person approached me who read Ben’s initial comment on the “BayAreaHuman” post that said “I confirm that this is a real person in good standing” as an endorsement of the post, when the comment was really just intended as confirming some facts about the identity of the poster, with basically complete independence from the content of the post)
I myself have access to some sensitive and somewhat confidential information, and am struggling with navigating exactly which parts are OK to share and which ones are not.
I assume there isn’t a public record of this anywhere? Could I hear more details about what was said? This sounds atrocious to me.
I similarly feel that I can’t trust the exculpatory or positive evidence about Leverage much so long as I know there’s pressure to withhold negative information. (Including informal NDAs and such.)
I agree with this too, and think it’s similarly terrible, but harder to blame any individual for (and harder to fix).
I assume it’s to a large extent an extreme example of the ‘large inferential gaps + true beliefs that sound weird’ afflicting a lot of EA orgs, including MIRI. Though if Leverage has been screwed up for a long time, some of that public reaction may also have been watered over the years by true rumors spreading about the org.
Let’s stand up for the truth regardless of threats from Geoff/Leverage, and let’s stand up for the truth regardless of the mob.
Let’s stand up for the truth! Maintaining some aura of neutrality or impartiality at the expense of the truth would be IMO quite obviously bad.
I think that it is seen as not very normative on LW to say “I know things, confidential things I will not share, and because of that I have a very [bad/good] impression of this person or group”. But IMO its important to surface. Vouching is an important social process.
It seems that your account is registered to just participate in this discussion and you withold your personal identity.
If you sincerely believe that information should be shared, why are you withholding yourself and tell other people to take risks?
I have no private information to share. I think there is an obvious difference between asking powerful people in the community to stand up for the truth, and asking some rando commentator to de-anonymize.
Anna is attempting to make people comfortable having this difficult conversation about Leverage by first inviting them just to share what factors are affecting their participation. Oliver is kindly obliging and saying what’s going through his mind.
This seems like a good approach to me for getting the conversation going. Once people have shared what’s going through their minds–and probably these need to received with limited judgmentality–the group can then understand the dynamics at play and figure out how to proceed having a productive discussion.
All that to say, I think it’s better to hold off on pressuring people or saying their reactions aren’t normative [1] in this sub-thread. Generally, I think having this whole conversation well requires a gentleness and patience in the face of the severe, hard-to-talk-about situation. Or to be direct, I think your comments in this thread have been brusque/pushy in a way that’s hurting the conversation (others feel free to chime in if that seems wrong to them).
[1] For what it’s worth, I think disclosing that your stance is informed by private info is good and proper.
I mentioned in a different comment that I’ve appreciated some of farp’s comments here for pushing back against what I see as a missing mood in this conversation (acknowledgment that the events described in Zoe’s account are horrifying, as well as reassurance that people in leadership positions are taking the allegations seriously and might take some actions in response). I also appreciate Ruby’s statement that we shouldn’t pressure or judge people who might have something relevant to say.
The unitofcaring post on mediators and advocates seems relevant here. I interpret farp (edit: not necessarily in the parent comment, but in various other comments in this thread) as saying that they’d like to see more advocacy in this thread instead of just mediation. I am not someone who has any personal experiences to share about Leverage, but if I imagine how I’d personally feel if I did, I think I agree.
On mediators and advocates: I think order-of-operations MATTERS.
You can start seeking truth, and pivot to advocate, as UOC says.
What people often can’t do easily is start with advocate, and pivot to truth.
And with something like this? What you advocated early can do a lot to color both what and who you listen to, and who you hear from.
The entire thesis of the post is that you want a mixture of advocacy and mediation in the community. So if your proposal is that we all mediate, and then pivot to advocacy, I think that is not at all what UOC says.
Not that I super endorse the prescription / dichotomy that the post makes to begin with.
I liked Farp’s “Let’s stand up for the truth” comment, and thought it felt appropriate. (I think for different reasons than “mediators and advocates”—I just like people bluntly stating what they think, saying the ‘obvious’, and cheerleading for values that genuinely deserve cheering for. I guess I didn’t expect Ollie to feel pressured-in-a-bad-way by the comment, even if he disagrees with the implied advice.)
Thanks. Your comments and mayleaf’s do mean a lot to me. Also, I was surprised by negative reaction to that comment and didn’t really expect it to come off as admonishment or pressure. Love 2 cheerlead \o/
I have thought about this UOC post and it has grown on me.
The fact is that I believe Zoe and I believe her experience is not some sort of anomaly. But I am happy to advocate for her just on principle.
Geoff has much more resources and much at stake. Zoe just has (IMO) the truth and bravery and little to gain but peace. Justice for Geoff just doesn’t need my assistance, but justice for Zoe might.
So I am happy to blindly ally with Zoe and any other victims. And yes I would like others to do the same, and broadcast that we will fight for them. Otherwise they are entering a potentially shitty looking fight with little to gain against somebody with everything to lose.
I don’t demand that no mediation take place, but if I want to plant my flag, that’s my business. It’s not like I am doing anything dishonest in the course of my advocacy.
And to be completely frank, as an advocate for the victims, I don’t really want AnnaSalomon to be one of the major mediators here. I don’t think she’s got a good track record with CFAR stuff at all—I have mentioned Robert Lecnik a few times already.
I think Kelsey’s post is right—mediators need to seem impartial. For me, Anna can’t serve this role. I couldn’t say how representative I am.
I will be happy to contribute financially to Zoe’s legal defense, if Geoff decides to take revenge.
In the meanwhile, I am curious about what actually happened. The more people talk, the better.
I appreciate this invitation. I’ll re-link to some things I already said on my own stance: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QKKnepsMoZmmhGSe
Beyond what I laid out there:
It was challenging being aware of multiple stories of harm, and feeling compelled to warn people interacting with Geoff, but not wanting to go public with surprising new claims of harm. (I did mention awareness of severe harm very understatedly in the post. I chose instead to focus on “already known” properties that I feel substantially raise the prior on the actually-observed type of harm, and to disclose in the post that my motivation in cherry-picking those statements was to support pattern-matching to a specific template of harm).
After posting, it was emotionally a bit of a drag to receive comments complaining that the information-sharing attempt was not done well enough, and comparatively few comments grateful that I had shared what I could, as best I could at the time, although the upvote patterns felt encouraging. I was pretty much aware that this was what was going to happen. In general, “flinching in anticipation of a high criticism-to-gratitude ratio” is an overall feeling I have when I imagine posting anything on LessWrong.
I was told by friends before posting that I ought to consider the risk to myself and to my contacts of tangible real-world retribution. I don’t have any experience with credible risk of real-world retribution. It feels mind-numbing.
Meta: I haven’t felt fully comfortable describing retribution concerns, including in the post, because I haven’t been able to rule out that revealing the tactical landscape of why I’m sharing or avoiding certain details is simply more information that can be used by Geoff and associates to make life harder for people pursuing clarity. This is easier now that Zoe has written firsthand about specific retribution concerns.
Meta-meta: It doesn’t feel great to talk about all this paranoid adversarial retribution thinking, because I don’t want to contribute to the spread of paranoia and adversarial thinking. It feels contagious. Zoe describes a very paranoid atmosphere within Leverage and among those who left, and I feel that attesting to a strategically-aware disclosure pattern carries that toxic vibe into new contexts.
Since it sounds like just-upvotes might not be as strong a signal of endorsement as positive engagement...
I want to say that I really appreciate and respect that you were willing to come forward with facts that were broadly known in your social graph, but had been systematically excluded from most people’s models.
And you were willing to do this in a pretty adversarial environment! You had to deal with a small, invisible, intellectual cold war that ensued, almost alone, without backing down. This counts for even more.
I do have a little bit of sensitive insider information, and on the basis of that: Both your posts and Zoe’s have looked very good-faith to me.
In a lot of places, they accord with or expand on what I know. There are a few parts I was not close enough to confirm, but they have broadly looked right to me.
I also have a deep appreciation for Zoe calling out that different corners of Leverage had very different experiences of it. Because they did! Not all time-slices or sub-groups within it experienced the same problems.
This is probably part of why it was so easy to play people’s personal experiences against each other: since he knew the context through which Leverage was experienced, Geoff (or others) could systematically bias whose reports were heard.
(Although I think it will be harder in the future to engage in this kind of bullshit, now that a lot of people are aware of the pattern.)
To those who had one of the better firsthand experiences of Leverage:
I am still interested in hearing your bit! But if you are only engaging with this due to an inducement that probably introduces a sampling bias, I appreciate you including that detail.
(And I am glad to see people in this broader thread, being generally open about that detail.)
I don’t have anything to add, but I just want to say I felt a pronounced pang of warmth/empathy towards you reading this part. Not sure why; something about fear/bravery/aloneness/fog-of-war.
I will talk about my own bit with Leverage later, but I don’t feel like it’s the right time to share it yet.
(But fwiw: I do have some scars, here. I have a little bit of skin in this one. But most of what I’m going to talk about, comes from analogizing this with a different incident.)
A lot of the position I naturally slide into around this, which I have… kind of just embraced, is of trying to relate hard to the people who:
WERE THERE
May have received a lot of good along with the bad
May have developed a very complicated and narratively-unsatisfying opinion because of that, which feels hard to defend
Are very sensitized to condemning mob-speak, because they’ve been told, again and again, that anything good they got out of the above will be swept out with the bathwater if the bad comes to light.
This sort of thing only stays covered up for this long if there was a lot of pressure, and a lot of plausible-sounding arguments, pointing in the direction of “say nothing.” The particular forms of that will vary.
Core Leverage seems pretty willing to resort to manipulation and threats? And despite me generally trying so hard to avoid this vibe: I want to condemn that outright.
Also, in any other circumstance: Most people are very happy to condemn people who break strong secrecy agreements that they’ve made. If you feel like you’ve made one, I recognize that this is not easy to defy.
(My own part in this story is small. The only reason I’m semi-comfortable with sharing it, is because I got all of my own “vaguely owning the fact that I broke a very substantial secrecy agreement, publicly, to all my friends” out of the way EARLY. It would be bogging me down like crazy, otherwise. I respect Zoe, and others, for defying comparable pulls, or even worse ones.)
If you’re stuck on this bit, I would like to say: This is an exceptional circumstance. You should maybe talk to somebody, eventually. Maybe only once your own processing has settled down. Publicly might not be the right call for you, and I won’t push for it. Please take care for yourself, and try to be careful to pick someone who is not especially prone to demonizing things.
People can feel their truth drowned out by mobs of uninvested people, condemning it from afar.
The people who know what happened here are in the minority. They have the most knowledge of what actually happened, and the most skin in this. They are also the people with the most to fear, and the most to lose.
People often don’t appreciate how much the sheer numbers game can weigh on you. It can come to feel like the chorus is looming over you in this sort of circumstance; poised, always ready to condemn you and yours from afar. Each individual member is only “speaking-their-truth” once, but in aggregate, they can feel like an army.
It’s hard to keep appropriate sight of the fact that the weight of the people who were there, and their story, is probably worth 1000x as much as even the most coherent but distant and un-invested condemning statement. They will not get as many shares. It might not even qualify as a story! But their contributions are worth a lot more, at least in my mind. Because they were THERE.
And I… want to stick up for them where relevant? Because this one wasn’t my incident, but I know how hard it might be for them to do it for themselves. I can’t swear I will do a good job of it? But the desire is there.
I do think a more-private forum, that is enriched for people who were closer to the event, might be a more comfortable place for some people to recount. It’s part of why I tried to talk up that possibility, in another thread.
...it is unfortunately not my place to make this, though. For various reasons, which feel quite solid to me.
(And after Ryan’s account? I honestly have some concerns about it getting infiltrated by one of the more manipulative people around Leverage. I don’t want to discount that fear! I still think it might be a good idea?)
I do think we could stand to have a clearer route for things to be shared anonymously, because I suspect at least some people would be more comfortable that way.
(Since “attempts at deanonymization” appears to be a known issue, it may be worth having a flag for “only share as numeric aggregations of >1, using my recounting as a data-point.”)
EDIT EDIT: This press release names Anna Salamon, Eli Tyre, Matthew Graves, and Matt Falshaw as several somewhat-intermediary people who can be contacted. I feel fewer misgivings around contacting them than I did around the proposal of contacting Geoff and Larissa to handle this internally.
I was once in a similar position, due to my proximity to a past (different) thing. I kinda ended up excruciatingly sensitive to how some things might read or feel to someone who was close, got a lot of good out of it (with or without the bad), and mostly felt like there was no way their account wouldn’t be twisted into something unrecognizable. And who may be struggling with processing an abrupt shift in their own personal narrative, although I sincerely hope the 2 years of processing helped to make this less of a thing? But if you are going through it anyway, I am sorry.
And… I want this to go right. It didn’t go right then; not entirely. I think I got yelled at by someone I respect, the first time I opened up about it. I’m not quite sure how to make this less scary for them? But I want it to be.
The people I know who got swept up in this include some exceptionally nice people. There is at least one of them whom I would ordinarily call exceptionally sane. Please don’t feel like you’re obligated to identify as a bad person, or as a victim, because you were swept up in this. Just because some people might say it about you doesn’t make it who you are.
While I realize I’ve kinda de-facto “taken a side” by this point (and probably limited who will talk to me as a result)? I was mispronouncing Geoff’s name before this hit; that’s pretty indicative of how little I knew him personally. I started out mostly caring that the consequences-for-him be reached through some kind of reasonable assessment, without caring too much about having it turn out one way or another. I still feel more invested in there being a good process, and in what will generate the best outcomes for the people who worked under him (or will ever work under him), than anything else.
Compared to Brent’s end-result of “homeless with health-problems in Hawaii” **? The things I’ve asked for have felt mild. But I also knew that if I wasn’t handling mentioning them, somebody else probably would. In my eyes, we probably needed someone outside of the Leverage ecosystem who knew a lot of the story (despite the substantial information-hiding efforts) to be handling this part of the response.
Pushing for people to publish the information-hiding agreement, and proposing that Geoff maybe shouldn’t have a position with a substantial amount of power over others (at least while we sort this out), felt to me like fairly weaksauce requests. I am still a bit surprised that Geoff may have taken this as a convincing audition for a “prosecutor” role? I am angry and clued-in enough to sincerely fill the role, if somebody has to and if nobody else will touch it. But it still surprised me, because it is not what I see as my primary responsibility here.
**Despite all his flaws and vices? I was close to Brent. I do care about Brent, and I wouldn’t have wished that for him.
An abstract note: putting stock in anonymous accounts potentially opens wider a niche for false accounts, because anonymity prevents doing induction about trustworthiness across accounts by one person. (I think anonymity is a great tool to have, and don’t know if this is practically a problem; I just want to track the possibility of this dynamic, and appreciate the additional value of a non-anonymous account.)
One tool here is for a non-anonymous person to vouch for the anonymous person (because they know the person, and/or can independently verify the account).
True. A maybe not-immediately-obvious possibility: someone playing Aella’s role of posting anonymous accounts could offer the following option: if you give an account and take this option, and the poster later finds out that you seriously lied, then they have the option to de-anonymize you. The point being, in the hypothetical where the account is egregiously false, the accounter’s reputation still takes a hit; and so these accounts can be trusted more. If there’s no possibility of de-anonymization, then the account can only be trusted insofar as you trust the poster’s ability to track the accounter’s trustworthiness, which seems like a more complicated and difficult task. (This might be a terrible thing to do, IDK.)
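(For concreteness, here is a minimal sketch of one way the de-anonymization clause could be made verifiable rather than purely trust-based, assuming a poster who learns the accounter’s identity up front. The salted-hash-commitment mechanism, and every name in it, are my own illustration of the idea, not something the proposal above specifies.)

```python
# Minimal sketch: the poster publishes a salted hash commitment of the
# accounter's identity alongside the anonymous account. Normally the salt
# and identity stay secret; if the poster later concludes the account was
# an egregious lie, revealing (identity, salt) lets anyone check the reveal
# against the original commitment, so a de-anonymization can't be forged
# after the fact.
import hashlib
import secrets

def commit(identity: str) -> tuple[str, str]:
    """Return (commitment, salt). Publish the commitment; keep the salt secret."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + identity).encode()).hexdigest()
    return digest, salt

def verify(commitment: str, identity: str, salt: str) -> bool:
    """Anyone can check a later reveal against the published commitment."""
    return hashlib.sha256((salt + identity).encode()).hexdigest() == commitment

# Hypothetical example: the poster commits at posting time, and reveals
# (identity, salt) only if the de-anonymization clause is triggered.
commitment, salt = commit("Jane Doe <jane@example.com>")
assert verify(commitment, "Jane Doe <jane@example.com>", salt)
assert not verify(commitment, "Someone Else", salt)
```

(The only design point here is that a later reveal can be checked by third parties against what was committed at posting time; it does nothing about the failure mode, discussed below, where the poster is wrongly convinced that a truth was a lie.)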
I get VERY creepy vibes from this proposal, and want to push back hard on it.
Although, hm… I think “lying” and “enemy action” are different?
Enemy action occasionally warrants breaking contracts back, after they didn’t respect yours.
Whereas if there is ZERO lying-through-negligence in accounts of PERSONAL EXPERIENCES, we can be certain we set the bar-of-entry far too high.
(Downvoted. I’d have strong downvoted but −5 seems too harsh. Sounds like you’re responding to something other than what I said, and if that’s right, I don’t like that you said “VERY creepy” about the proposal, rather than about whatever you took from it.)
I was very up-front about the role I am attempting to embody in this: Relating to, and trying to serve, people with complicated opinions who are finding it hard to talk about this.
I feel we needed someone to take this role. I wish someone had done it for me, when my stuff happened.
You seem to not understand that I am making this statement, from that place and in that capacity.
Try seeing it through the lens of that, rather than thinking that I’m making confident statements about your epistemic creepiness.
Hopefully this helps to resolve your confusion.
That depends on the algorithm used to determine whether “you seriously lied”.
Imagine a hypothetical situation where telling the truth puts you in danger, but you read this offer, think “well, I am telling the truth, so they will protect my anonymity”, and truthfully describe your version. Unluckily for you, your opponent lied, and was more convincing than you. Afterwards, because your story contradicts the accepted version of events, it seems that you were lying and unfairly accusing people who are deemed innocent. As punishment for “seriously lying”, your identity is exposed.
If people with sensitive information suspect that something like this could happen, then it defeats the purpose of the proposal.
Yeah, that seems like a big potential flaw. (Which could just mean, no one should stick their neck out like that.) I’m imagining that there’s only potential benefit here in cases where the accounter also has strong trust in the poster, such that they think the poster almost certainly won’t be falsely convinced that a truth is an egregious lie.
In particular, the agreement isn’t about whether the court of public opinion decides it was a lie, just the poster’s own opinion. (The poster can’t be held accountable to that by the public, unless the public changes its mind again, but the poster can at least be held accountable by the accounter.) (We could also worry that this option would only be taken by accounters with accounts that are infeasible to ever reveal as egregious lies, which would be a further selection bias, though this is sort of going down a hypothetical rabbit hole.)
In the past, I found it difficult and costly to talk about Leverage and the dynamics around it, or about organizations that are or have been affiliated with effective altruism, though on the occasions I did speak up, I said more than most. I would have done it more, but the costs were that some of my friends in effective altruism interacted with me less, seemed to take me less seriously in general, and discouraged me from speaking up again, sometimes with nothing more than peer pressure.
That was a few years ago. For lots of reasons, it’s now easier, less costly, and less risky for me, and easier not to feel fear. I don’t know yet what I’ll say regarding any of this as it relates to Leverage, because I don’t have any sense of how I might be prompted or provoked to respond. I expect I’ll have more to say, but I don’t yet have particular feelings about what I might share as relevant. I’m sensitive to how my statements might impact others, but for myself personally I feel almost indifferent.
My general feeling about this is that the information I know is either well-known or otherwise “not my story to tell.”
I’ve had very few direct interactions with Leverage except applying to Pareto, a party or two, and some interactions with Leverage employees (not Geoff) and volunteers. As is common with human interactions, I appreciated many but not all of my interactions.
Like many people in the extended community, I’ve been exposed to a non-overlapping subset of accounts and secondhand rumors of varying degrees of veracity. For some things, it’s been long enough that I can’t track which confidences I’m supposed to keep, and under which conditions, so it seems better to err on the side of silence.
At any rate, it’s ultimately not my story/tragedy. My own interactions with Leverage have not been noticeably harmful or beneficial to me personally.