My contact with Leverage over the years was fairly insignificant, which is part of why I don’t feel like it’s right for me to participate in this discussion. But there are some things that have come to mind, and since Anna’s made space for that, I’ll note them now. I still think it’s not really my place to say anything, but here’s my piece anyway. I’m speaking only for myself and my own experience.
I interviewed for an ops position at Leverage/Paradigm in early 2017, when I was still in college. The process took maybe a couple months, and the in-person interview happened the same week as my CFAR workshop; together these were my first contact with the Bay community. Some of the other rationalists I met that week warned me against Leverage in vague terms; I discussed their allegations with the ops team at my interview and came away feeling satisfied that both sides had a point.
I had a positive experience at the interview and with the ops team and their hiring process in general. The ops lead seemed to really believe in me and recommended me to other EA orgs after I didn’t get hired at Paradigm, and that was great. My (short-term) college boyfriend had a good relationship with Leverage and later worked at Paradigm. In mid-2017 I met a Leverage employee in a non-Leverage context and we went on a couple dates; that ended amicably. All that’s just to say that at that point, I thought I had a fairly positive relationship with them.
Then, Leverage/Paradigm put on EA Summit in the summer of 2018. I applied to attend and was rejected. My boyfriend, who I think attended a Paradigm workshop around that time, managed to get that decision reversed, but told me that I was rejected because I was on a list of people who might speak ill of Leverage. That really rubbed me the wrong way. I didn’t think I had ever acted in a way to deserve that, and it seemed bad to me that they were so paranoid about their reputation that they would reject large swaths of people from a conference that’s supposed to bring together EAs from around the world, just because of vague suspicion. Ironically, that’s the personal experience that led me to distrust Leverage the most.
The bottom line is that discussions around Leverage’s reputation have always been really fraught and murky, and it’s totally understandable to me that people would fear unknown repercussions for discussing Leverage in public. Many other people in these threads have said that in various ways, but there’s my concrete example.
The obsession with reputation control is super concerning to me, and I wonder how this connects up with Leverage’s poor reputation over the years.
Like, I could imagine two simplified stories...
Story 1:
Leverage’s early discoveries and methods were very promising, but the inferential gap was high—they really needed a back-and-forth with someone to properly communicate, because everyone had such different objections and epistemic starting points. (This is exactly the trouble MIRI had in its early comms—if you try to anticipate which objections will be salient to the reader, you’ll usually miss the mark; and if you try to preempt many possible objections at once, you become long-winded and still miss the mark.)
Because of this inferential gap, Leverage acquired a very bad reputation with a bunch of people who (a) misunderstood its reasoning, and then (b) didn’t get why Leverage wasn’t investing more into public comms.
Leverage then responded by sharing less and trying to reset its public reputation to ‘normal’. It wasn’t trying to become super high-status, just trying to undo the damage already done / prevent things from further degrading as rumors mutated over time. Unfortunately, its approach was heavy-handed and incompetent, and backfired.
Story 2:
Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers.
This was one of the causes of Leverage’s bad reputation, from an early date, through some combination of people noticing when Leverage bungled a PR operation, and humans being pretty good at detecting other humans’ character and picking up on subtle cues that someone is manipulative.
To what extent is one or the other true? (Another possibility is that there isn’t much of a causal tie between Leverage’s PR obsession and its bad reputation, and they just both occurred for other reasons.)
Based on broad-strokes summaries given to me by ex-Leveragers (though admittedly not first-hand experience), I would say that the statement “Leverage was always unusually obsessed with its reputation, and unusually manipulative / epistemically uncooperative with non-Leveragers” rings true to what I have heard.
Some things mentioned to me by Leverage people as typical/archetypal of Geoff’s attitude include being willing to lie to people outside Leverage, feeling attacked or at risk of being attacked, and viewing adjacent non-Leverage groups within the broader EA sphere as enemies.
Thanks! To check: did one or more of the ex-Leveragers say Geoff said he was willing to lie? Do you have any detail you can add there? The lying one surprises me more than the others, and is something I’d want to know.
Here is an example:
Zoe’s report says of the information-sharing agreement “I am the only person from Leverage who did not sign this, according to Geoff who asked me at least three times to do so, mentioning each time that everyone else had (which read to me like an attempt to pressure me into signing).”
I have spoken to another Leverage member who was asked to sign, and did not.
The email from Matt Fallshaw says the document “was only signed by just over half of you”. Note that the recipients list includes people (such as Kerry Vaughan) who were probably never asked to sign because they were not present, but I would believe that such people are in the minority; so this isn’t strict confirmation that Geoff was lying to Zoe, but it does make it more likely.
This is lying to someone within the project. I would subjectively anticipate higher willingness to lie to people outside the project, but I don’t have anything tangible I can point to about that.
I am more confident that what I heard was “Geoff exhibits willingness to lie”. I also wouldn’t be surprised if what I heard was “Geoff reports being willing to lie”. I didn’t tag the information very carefully.