Hi all, former Leverage 1.0 employee here.

The original post and some of the comments seem epistemically low quality to me compared to the typical LessWrong standard. In particular, on top of a lot of insinuations, there are some factually false claims. This seems especially problematic given that the post is billed as common knowledge.

There’s a lot of dispute and hate directed towards Leverage, which, frankly, has made me hesitant to defend it online. However, a friend of mine in the community recently said something to the effect of, “Well, no former Leverage employee has ever defended it on the attack posts, which I take as an indication of silent agreement.”
That rattled me and so I’ve decided to weigh in. I typically stay quiet about Leverage online because I don’t know how to say nuanced or positive things without fear of that blowing back on me personally. For now, I’d ask to remain anonymous, but if it ever seems like people are willing to approach the Leverage topic differently, I intend to put my name on this post. I don’t expect my opinion alone (especially anonymously) to substantially change anything, but I hope it will be considered and incorporated into a coherent and more complete picture.
At a macro level, I had a really positive experience at Leverage. I didn’t feel pressured to do self-improvement or use experimental psychology techniques, and I appreciated the freedom to do independent research. I felt I could (and did on several occasions) opt out of the group-dynamics experiments and training, and was largely free to do my own thing. I learned a lot, became much more curious about the world and willing to form and defend my own views, and met some really amazing people. If I ever have kids and tell them about my bold younger years, I fully expect wholesome Leverage stories to be on the list (with no cult undertones). I found the people to be kind and thoughtful, and the organization as a whole to be broadly supportive and respectful of my wishes and boundaries. The intellectual environment was incredible and the best of my life. The worst part of my Leverage experience was the negativity I experienced from the EA and rationality communities (for example, receiving hate mail), and the distance it put between me and people I respect.
Overall, I think my experience really mismatched the picture of Leverage described by OP.
That said, I want to second Freyja’s comment that Leverage was large and pretty decentralized and people’s experiences really differed. I know of at least two former employees who I believe had importantly negative experiences, and that speaks to mistakes made by the organization and its participants. Nonetheless, I think claiming that the OP’s picture above represents common knowledge is importantly wrong and a real disservice to future efforts of rationalists to try to understand Leverage.
On the object level, here are some comments on aspects of the original bullets that didn’t match my experience:
• I didn’t feel encouraged or pressured to live at the office as a new hire. I lived there initially because it made it easier to relocate from the East Coast. I moved out shortly after and no one seemed bothered.
• I didn’t find the information policy I signed overly stringent. I’ve signed confidentiality agreements with multiple normal for-profit companies (that aren’t affiliated with Leverage, EA, or rationality), and this policy was less restrictive than those. It allowed for personal blogs as well as sharing Leverage training techniques and research piecemeal (without approval required). It required permission before publishing the organization’s research online or starting an extended training / coaching relationship with anyone. It also prohibited sharing personal information about hires or information a trainer learned about a client during training / coaching. These rules seemed sensible to me. I had two different outside-of-Leverage romantic partners while I worked at Leverage, and I saw an external counselor. I discussed my experiences at Leverage (and Leverage’s research) with both and didn’t feel I was in violation of the information policy.
• Charting was not the only self-improvement or psychology technique that Leverage researched or used in training. Focusing, IFS, coherence therapy, CBT tools, deliberate practice, TAPs, meditation, and more were also used and incorporated. Individual researchers and trainers also developed and used a number of their own techniques that were not based on charting. The charting technique Geoff initially developed also underwent a number of changes over the years primarily driven by researchers other than Geoff. Leverage’s training and psychology research was not primarily driven by Geoff or predominantly composed of charting.
• I had a good experience with all the training I did and did not experience any form of mental fragmentation. I had one very positive experience in particular, where my social anxiety was significantly and stably lessened afterward. Otherwise I found the training beneficial and better than various other self-improvement tools I’ve tried, but unflashy. I was initially hopeful about larger or faster training successes, but I mostly didn’t experience these; good tools for thinking about how to solve my problems, improving my models, and relating to my feelings reliably helped me, but there was no magic self-improvement sauce.
• It is not true that people were expected to undergo training by their manager. My understanding and experience of the policy and norms were that (1) training / debugging / coaching wasn’t required, (2) if you chose to do training you could choose your trainer (or choose to avoid a particular trainer or set of trainers), (3) “trainer” was a particular job role that did not include being a manager (did not include evaluating performance or determining payroll status), and (4) not all members of the org were trainers or expected to train anyone. Over several years, I switched between trainers several times with no problem and chose to avoid working with certain trainers entirely. (Hedge: there were two smaller training groups where I believe it was a norm for members of the group to train each other. I wasn’t part of those groups and can’t speak to them.)
• Six types of bodywork were researched (that I know of): a bodywork style from NYU’s acting school, energy work done by Luminous Awareness, bodywork styles used by two different independent bodywork practitioners that people recommended, embodiment and movement techniques (for example, the Alexander Technique and Feldenkrais), body-focused introspection (Focusing), and massage (one researcher looked into and pursued massage certification). Touching in all forms I encountered or heard about was minimal and consensual (like a hand on the back), and not all bodywork involved touch. Several researchers thought bodywork was ineffective and overblown, and several thought it was effective and useful (among those, some thought the change was obvious and legible and some felt the impacts were confusing or hard to pin down). This was a big source of internal disagreement. While I tend to prefer interventions that are, on my priors, more credible than bodywork and energy healing, there are a lot of anecdotal reports of large positive effects from bodywork and energy healing (like curing chronic pain), and I was glad that some people chose to look into it.
• I did not join Leverage to be a guinea pig for psychological experimentation. I joined because I wanted to research self-improvement techniques and I liked the vision of starting a university for people who wanted to run high impact projects. I was disappointed with how little I learned in college, and I was (and still am) excited about research into different versions of higher education. I thought the training techniques Leverage had were interesting and helpful, but “being experimented on” was not my primary purpose in joining nor would I now describe it as a main focus of my time at Leverage.
• I did not find the group to be overly focused on “its own sociology.” Most people I interacted with were mostly doing research (including research in the field of history and sociology), ops (accounting, facility maintenance), or training (see tools above), rather than focusing on the group itself. Near the end, there was lots of internal conflict between different teams / subgroups of the organization, which did feel self-indulgent and unhealthy to me. My understanding is that this contributed to the organization being shut down.
• The stated purpose of Leverage 1.0 was not to literally take over the US and/or global governance or “take over the world,” nor did I believe or feel pressured to believe that the organization would do so. I’ve been told that the original mission was fairly classic EA (improve the world via the most effective interventions), but Leverage took a more abstract-reasoning-oriented and less data-driven approach. I am glad they did this, largely for diversification (though I can see why people object to Leverage taking talent that might otherwise have gone to other EA orgs) and because it led to them running the initial EA Summits. By the time I joined, the stated mission was to improve the world through social science, specifically via the research and delivery of useful training and effectiveness techniques, research into history / sociology, and coordination. This matched my experience of what the org did day-to-day. For most of the years I was there, there was a training team, a sociology team, etc. Within that broad umbrella there was a lot of diversity in what people worked on and what impact they believed their research and Leverage overall would have; I can’t speak to what other individuals privately believed, but OP’s claim is false.
• I did not believe or feel pressured to believe that Leverage was “the only organization with a plan that could possibly work.” I continued to be involved in and supportive of EA while working at Leverage, including donating to SENS (pre-recent disputes), GFI, and other orgs. I respected Eliezer, loved HPMOR, was optimistic about MIRI, and thought the raising-the-sanity-waterline goal of the rationality community was great. I also think many hospitals, animal shelters, advocacy groups, and other extremely common interventions and institutions succeed at their missions and contribute to improving the world in many straightforward ways.
• I didn’t believe or feel like I was supposed to believe that Geoff “was among the best and most powerful ‘theorists’ in the world.”
• I did not find “Geoff’s power and prowess as a leader [to be] a central theme.” For most of my years at Leverage 1.0, I interacted primarily with my research, my team, and my team leader; Geoff / Geoff’s leadership was not a major focus for us. In the year before Geoff shut Leverage 1.0 down, Geoff’s leadership was a central theme insofar as he came under criticism internally for not resolving conflict in the group. I think this was hard for all parties involved, and is not best characterized as “his power being a central theme.”
• The comment on Geoff’s dating life (even after OP’s edits) still strikes me as misleading. For example, one of the women mentioned was in a long-term relationship with Geoff prior to her joining Leverage. She subsequently applied to work at Leverage and was accepted by a hiring committee in accordance with the recruitment policy at the time; the hiring committee knew she was in a relationship with Geoff which she expected to continue, and considered that in the hiring process. (I communicated with her to make sure she was okay with me posting this bullet, and she also added that she did not consider herself to be a subordinate to Geoff while they were dating.) I believe there’s similar clarifying context in the other cases, though I’m not willing to discuss the details without permission from the others involved. I also want to go on record and apologize for participating in the discussion of someone’s romantic life online, and I’m sorry it’s come to this.
Three final comments:
- I believe that Leverage was great in many ways and I personally benefited a lot from working there, but I also believe it had real problems and made mistakes. I think the discussion in the comments speaks for itself regarding the negatives associated with Leverage. I view experimenting with self-improvement tools and non-standard organizational structures as generally risky (but worth having at least some organizations do), and Leverage didn’t handle it delicately in all cases; when I hear of former Leveragers reporting harms, I tend to believe them and find fault with the organization. However, I also think there are fewer reports of harms (through the grapevine or formal channels) than are widely believed to exist, and less of a picture of the positives.
- Sometimes I have seen members of the rationalist community hear positive reports about Leverage from ex-Leveragers, insinuate that the ex-Leveragers are basically “still brainwashed,” and then ignore the information. This seems epistemically problematic, because it is very hard to respond to. I don’t know if there’s anything I can do about that here, other than try to convey some nuance, and caution that if all Leveragers’ positive experiences are dismissed as brainwashing or cult-member-positivity, it will be very hard to find out any time Leverage-centered gossip is wrong. I’d desperately like the in-person Bay Area community to form a more coherent view on Leverage that unifies the positives and negatives, and extracts lessons about self-experimentation, psychology and training research, non-standard company structures, and weird ambitious organizations in general. I don’t see how that will happen if the current discourse around Leverage doesn’t improve substantially, including making the environment more hospitable for Leveragers to talk about the positives and negatives of their experience.
- Finally, I expect to respond to comments that seem to me like they’re posted in the spirit of genuine inquiry – please avoid vitriol and insinuations. Sorry for the length of this comment, thanks for bearing with me.
Thank you for this.

In retrospect, I could’ve done more in my post to emphasize:
Different members report very different experiences of Leverage.
Just because these bullets enumerate what is “known” (and “we all know that we all know”) among “people who were socially adjacent to Leverage when I was around”, does not mean it is 100% accurate or complete. People can “all collectively know” something that ends up being incomplete, misleading, or even basically false.
I think my experience really mismatched the picture of Leverage described by OP.
I fully believe this.
It’s also true that I had at least 3 former members, plus a large handful of socially-adjacent people, look over the post, and they all affirmed that what I had written was true to their experience, fairly obvious or uncontroversial, and likely to be held as true by dozens of people. Comments on this post attest to this, as well.
I don’t advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more information in the comments.
Saying the same thing a different way: the post summarizes an understanding that dozens of people all share. If we’re all collectively wrong, I don’t advocate a posting standard that requires the poster to somehow determine that we’re wrong, via some method other than soliciting more information in a public forum, before coming to that forum with the best of our current understanding.
I am glad that this post is leading to a broader and more transparent conversation, and more details coming to light. That’s exactly what I wanted to happen. It feels like the path forward, in coming to a better collective understanding.
Thank you again for your clear and helpful contribution.
I don’t advocate for an epistemic standard in which a single person, doing anything less than a singlehanded investigative journalistic dive, is expected to do more than that, epistemic-verification-wise, before sharing their current understanding publicly and soliciting more information in the comments.
Sure, but you called the post “Common Knowledge Facts”. If you’d called the post “Me and my friends’ beliefs about Leverage 1.0” or “Basic claims I believe about Leverage 1.0”, then that would IMO be a better match for the content and less of a claim to universality (that everyone should assume the content of the post as consensus and only question it if strong counter-evidence comes in).
Right now, for someone to disagree with the post, they’re in a position where they’re challenging the “facts” of the situation that “everyone knows”. In contrast I think the reality is that if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge.
Completely fair. I’ve removed “facts” from the title, and changed the sub-heading “Facts I’d like to be common knowledge” (which in retrospect is too pushy a framing) to “Facts that are common knowledge among people I know”.
I totally and completely endorse and co-sign “if people bring forward their personal impressions as different to the OP, this should in large part be treated as more data, and not a challenge.”

Appreciate you editing the post, that seems like an improvement to me.
It feels like the “common knowledge” framing is functioning as some form of evidence claim? “Evidence for the truth of these statements is that lots of people believe them”. And if it’s true that lots of people believe them, that is legitimate Bayesian evidence.
At the same time, it’s kind of hard to engage with, and I think saying “everyone knows” makes it feel harder to argue with.
A framing I like (although I’m not sure it entirely helps here with ease of engagement) is the “this is what I believe and how I came to believe it” approach, as advocated here. So you’d start off with “I believe Leverage Research 1.0 has many of the properties of a high-demand group such as”, proceeding to “I believe this because of X things I observed and Y things I heard that were corroborated by groups A and B”, etc.
I appreciate hearing clearly what you’d prefer to engage with.
I also feel that this response doesn’t adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people’s desire for privacy.
( … which makes me feel sad, discouraged, and frustrated. It comes across as “why didn’t you just say X”, when there are in fact strong reasons why I couldn’t “just” say X.)
By “tactically adversarial”, I mean that Geoff has an incredibly strong incentive to suppress clarity, and make life harder for people contributing to clarity. Zoe’s post goes into more detail about specific fears.
By “desire for privacy”, I mean I can’t publicly lay out a legible map of where I got information from, or even make claims that are specific enough that they could’ve only come from one person, because the first-hand sources do not want to be identifiable.
Unlike former members, Pareto fellows, workshop attendees, and other similar commenters here, I did not personally experience anything first-hand that is “truly mine to share”.
It was very difficult for me to create a document that I felt comfortable making public, without feeling I was compromising the identity of any primary source. I had to stick to statements that were so generic and “commonly known” that they could not be traced back to any one person without that person’s express permission.
I agree it’s really hard to engage with such statements. In general it’s really hard to make epistemic headway in an environment in which people fear serious personal repercussions and direct retribution for contributing to clarity.
I, too, find the whole epistemic situation frustrating. Frustration was my personal motivation for creating this document: namely, that people I spoke to who were interacting with Geoff in the present day were totally unaware of any yellow flags around Geoff at all.
My hope is that inch by inch, step by step, more and more truth and clarity can come out, as more and more people become comfortable sharing their personal experience.
I’m very sorry. Despite trying to closely follow this thread, I missed your reply until now.
I also feel that this response doesn’t adequately acknowledge how tactically adversarial this context is, and how hard it is to navigate people’s desire for privacy.
You’re right, it doesn’t. I wasn’t that aware or thinking about those elements as much as I could have been. Sorry for that.
It was very difficult for me to create a document that I felt comfortable making public...
It makes sense now that this is the document you ended up writing. I do appreciate that you went to the effort of writing up a critical document to raise important concerns. It is valuable and important that people do so.
My hope is that inch by inch, step by step, more and more truth and clarity can come out, as more and more people become comfortable sharing their personal experience.
Hear, hear.
--
If you’ll forgive me suggesting again what you should have written, I’m thinking the adversarial context might have been it. If I had read that you were aware of a number of severe harms that weren’t publicly known, but that you couldn’t say anything more specific because of fears of retribution and the need to protect privacy, that would have been a large and important update to me regarding Leverage. And it might have gotten a conversation going about the situation, to figure out whether and what information was being suppressed.

But it’s easier to say that in hindsight.
Thanks, this all helps. At the time, I felt that writing this with the meta-disclosures you’re describing would’ve been a tactical error. But I’ll think on this more; I appreciate the input, it lands better this time.
I did write both “I know former members who feel severely harmed” and “I don’t want to become known as someone saying things this organization might find unflattering”. But those are both very, very understated, and purposefully de-emphasized.

Another former Leverage employee here. I agree with the bullet points in Prevlev’s post. And my experience of Leverage broadly matches theirs.

This is great, and straightforward, and I’m glad you joined the conversation. Thank you.
It would be useful to have a clarification of these points, to know how different an org you actually encountered compared to the one I did when I (briefly) visited in 2014.
It is not true that people were expected to undergo training by their manager.
OK, but did you have any assurance that the information from charting was kept confidential from other Leveragers? I got the impression that Geoff charted people he raised money from, for example, so it at least raises the question of whether information gleaned from debugging might be discussed with that person’s manager.
“being experimented on” was not my primary purpose in joining nor would I now describe it as a main focus of my time at Leverage.
OK, but would you agree that a primary activity of Leverage was to do psych/sociology research, and that a major (>=50%) methodology for that was self-experimentation?
I did not find the group to be overly focused on “its own sociology.”
OK, but would you agree that at least ~half of the group spent at least ~half of their time studying psychology and/or sociology, using the group as subjects?
The stated purpose of Leverage 1.0 was not to literally take over the US and/or global governance or “take over the world,”...OP’s claim is false.
OK, but you agree that it was to ensure “global coordination” and “the impossibility of bad governments”, per the plan, right? Do you agree that “the vibe was ‘take over the world’”, per the OP?
I did not believe or feel pressured to believe that Leverage was “the only organization with a plan that could possibly work.”
OK, but would you agree that many staff said this, even if you personally didn’t feel pressured to take the belief on?
I did not find “Geoff’s power and prowess as a leader [to be] a central theme.”
OK, but did you notice staff saying that he was one of the great theorists of our time? Or that a significant part of the hope for the organisation was to deploy and adapt certain ideas of his, like connection theory (which “solved psychology”), to deal with cases involving multiple individuals, in order to design larger orgs, memes, etc.?
Hopefully, the answers to these questions can be mostly separated from our subjective impressions. This might sound harsh, or like a cross-examination. But it seems necessary in order to figure out to what extent we can reach a shared understanding of “common knowledge facts”, at least about different moments in LR’s history (potentially also differing in our interpretations), versus the facts themselves actually being contested.
+1 for the detail. Right now there’s very little like this explained publicly (or accessible in other ways to people like myself). I found this really helpful.
I agree that the public discussion on the topic has been quite poor.