Timeline Additions
I rather liked the idea of making a timeline!
Geoff currently has a short doc on the timing of changes in org structure, but it doesn’t include much else yet.
Depending on how discussion here goes, I might transfer/transform this into its own post in the future. Will link them, if so.
Preamble
Nobody has talked much in public about the most dysfunctional things, yet? I am going to switch strategies out of dark-hinting and anonymity at this point, and put my cards down on the table.
This will be a sketch of the parts of this story that I know about. I do not have exact dates, and these are just broad-strokes of some of the key incidents here.
And not all of these are my story to tell? So sometimes, I will really only feel comfortable providing the broad-strokes.
(If someone has a better 2-3 sentence summary, or the full story, for some of these? Do chime in.)
Each of these is something I feel pretty solid about believing. I think these incidents belong somewhere on any good consensus-timeline, but they are not the full set of relevant events.
(I only have about 3-6 relevant contacts at the moment, but I’ve gotten at least 2 points of confirmation on each of these. It was not in this exact wording, though.)
Timeline Pieces
Early L1-present: Leverage has always had weird levels of PR-protective secret-keeping stuff, as far back as I can remember (~2015)
I believe this may have been true since before they probably had anything worth hiding? Not confident in that, though.
Early L1: Leverage runs first EA Global (2013)
It brought a lot of important early EA people together, and a lot of them left on friendlier terms with one another
They also ran the 2014 one, but the character was pretty different; less “people crowded in a single house,” more “conference + summit”
Early-L1: Panel discussion between Geoff & Anna Salamon (2014)
Brought up a few of Geoff’s non-materialist views
Seems to have been some sort of turning-point that deepened a rift between Geoff and Leverage vs the Rationalist and EA philosophies and communities
???: Sense of a developing rift between Leverage and EA
Started out relatively friendly, in first EAG era
Drifted apart due to a mix of philosophical differences and some growing anti-Leverage reputational dynamics
Geoff and other Leveragers were largely blocked from collaborations with EA orgs, and felt pretty shunned
Mid-L1: Attempt by 1-2 Leverage-aligned people to use CEA as a Leverage recruitment vehicle. This escalated for a while, and eventually became such a problem that they were rebuffed and fired.
Geoff was aware of this
Oli was pretty burned out, scared, and depressed during this period. The Leverage drama contributed to that, although it was not the sole reason.
I was dating Oli (habryka). Oli didn’t give me very much detail for a long time, but I could pick up on the fact that he was scared by this component of it, and he did tell me some of it eventually.
I have not seen Oli get this scared very often? So this does feel almost-personal for me, and I get pretty incensed about this.
Late L1: While worried about defunding, the different psychology factions did things to each other that… bordered on psychological warfare?
Some nebulous mix of “actually fucking with each other,” “claiming/convincing-self that the other faction fucked with them,” and “trying to enforce unreasonable/unhealthy norms on each other.”
Different factions and splinter-groups often had different degrees and specifics of dysfunction.
(I’d like a more kosher wording for this! But I’m not sure how else to express just how bad it got.)
Late L1/post-L1: At least one faction freaked the hell out about social contagion, and engaged in a lot of dysfunctional pressuring of others as a result
I happen to think that social contagion is a useful model? But that they failed to factor in enough “it all adds up to normality,” and got pretty arrogant about their models and personal competence, in a way that did not end well.
End of L1: Leverage 1.0 was dissolved
My personal take is that dissolving and restructuring was overall a good move
As much as I sympathize with the intent, I am not always a big fan of the legacy and specifics of the information-hiding agreement (I’ll pick that fight later, though.)
post-L1: A few non-Leverage people who were close to someone in one of the psychology factions experienced some nasty second-degree fallout drama when their ex-Leverager friend started claiming they were {infected with objects, had attacked the Leverager with their aura, etc.}.
This is the short version of my story? I experienced one of these second-degree* echoes. See “My Story” below.
I reported the 10-second-version of my story to Anna and MattF in an anonymized text snippet, shortly after Zoe posted.
I have discovered others who were affected by some second-degree fallout drama, but the exact stories differ.
My Story
I was sworn into an intense secrecy agreement (“do not tell anyone, even if they will get hurt if they don’t know”), and told by a friend that I “contained an object” (“object” is basically their confusing term for a psychologically-unhealthy memetic thing). The Ziz/CFAR incident hit the same damn day. I responded by requesting an outside-view sanity-check from Oli**. I told my friend that I’d told someone, as soon as I got back, and shit hit the fan.
I sometimes call my incident the “quarantine-before-quarantine?” I responded to things like “getting told that I’m not allowed to email partner-of-friend because that gave partner an object” and “friend vents that they can’t visit friend’s-partner, because they caught an object from me, and now they have to wait until someone has an opening for about an hour of bodywork” and “friend says they felt me attack them, while I was just eating breakfast” by generating and following monotonically-increasing explicit rules-of-separation, until we were living in the same house but were not allowed to talk or even be in the same room together. We both moved out a month later. The whole thing is a long story, but basically, friend and I had a gigantic falling-out as a result of all of this.
I was mum for a long time about everything except “I broke a major secrecy agreement” and “friend & I are not even able to calmly co-exist in the same house anymore,” because the friend had made it clear that talking honestly about any of this would read to them as “throwing them under the bus.” I do genuinely still care about their well-being.
If you know who I am talking about, do not reveal it publicly and please be nice to them. What they went through was even worse than what I experienced.
Same goes for friend’s-partner, who always struck me as swept-up and misguided, not malicious. They were the route by which this insane frame reached me, but I genuinely believe that they meant well. There were even times when the partner was more charitable towards me, than ex-friend was.
I distantly wish both of them well. I also do not wish to speak privately with either of them about this, at present.
On Centers of Dysfunction
In terms of clusters-of-dysfunction:
I think early-L1 was generally less-dysfunctional, although the culture had many of what I would think of as “risk-factors.”
The worst stuff seems to have reached a head right near the end of L1?
Reserve was one of the more-functional corners, and was basically fine.
I worked at Reserve for a while; you could pick up on some of the “taste of Leverage” from that distance, but it never appears to have escalated to anything seriously dysfunctional.
For example: my worst complaint is that C was weirdly-intense about not granting me access to the #general Slack channel, even though that could get in the way of doing my job sometimes
In general, what I’ve seen seems consistent with some ex-Leveragers getting a substantial amount of good out of the experience. Sometimes with none of the bad, sometimes alongside it.
ex: I recognize that bodywork was very helpful to my ex-friend, in working through some of their (unrelated) trauma. Many people have reported good experiences with belief-reporting, and say they found it useful.
I also think there were a lot of regrettable actions taken by people who were swept up in this at the time, people who would be harmless under normal circumstances.
It can be hard to judge this, especially from where I am? But as bad as the things that happened were, I think this is broadly true of most of the people involved.
I do not want to put people through the sort of mob-driven invalidation that I once felt.
I was once friends with Brent. I still care about his well-being. There are times where I was under a lot of pressure to write that out of my personal narrative, but it was ultimately healthier for me that I chose to keep it.
I hope that those with stories about Leverage that are different from mine feel the right to lay claim to the positives of their experiences, as well as the negatives.
Footnotes
* Technically, my friend was dating an ex-Leverager. So I actually got a third-degree burn.
** I told Oli something to the effect of “Mental illness as social contagion theory; claimed to be spread highly-effectively through circling. Not sure if Ziz incident may be an instance? If there’s another psychotic break within 1 month, boost likelihood of this being true. If there’s not, please update downward on this model.*** Pieces of Leverage’s model of social contagion did not match my own theory of social contagion, and I’m not entirely confident who is in the right, here? Also, this one may have come out of Leverage, but they say it was an accident.”
(...that is probably roughly everything I said? It was succinct, in part because I was taking seriously the possibility that I had caught something. In my theory of social contagion, bandwidth really matters. Some Leveragers behaved in a way that implied thinking bandwidth mattered less, and this was one of the first things (of several) that struck me as insane about their lens on it.)
*** Ziz turned out to be already-crazy as a baseline. There were no psychosis episodes that month from anyone else. I asked around, and I do not believe Ziz had any strong connection to Leverage at all, but especially not in that time-period.
Threads Roundup
Several things under the LW Leverage Tag
Leverage Basic Facts EA Post & comment thread
I discovered this one a little late? Still flipping through it.
BayAreaHuman LW Post
By now, I have been able to confirm that every concrete point made in that post seems true or reasonable, either to myself or to at least one of my contacts (not always two). The tone is slightly-aggressive, but seems generally truth-seeking, to me.
I think it leans more towards characterizing dysfunctional late-L1, than early-L1? But not strictly.
Someone, probably Geoff (it’s apparently the kind of thing he does, confirmed by 2+ people), sent out emails to friends of Leverage framing it as an unwarranted attack and encouraging them to flood the comment thread with people’s positive experiences.
I do not like that he did this! I know someone else who intends to write up something more thorough about this. But if they don’t, I am likely to comment on it myself, after saving evidence and articulating my thoughts.
EDIT: I do think a lot of the positive accounts are honest! I am not accusing any commenter of lying. My concern here is selective reporting, and something of a concentration of force dynamic that I believe may have been invoked deliberately in a way that I do not trust as truth-seeking.
Matt Falshaw’s recent email mentioned non-Zoe people writing “disingenuous and deliberately misleading” posts in the past? If that was meant to implicate BAH, then I think it was being a bit “disingenuous and deliberately misleading.”
Zoe’s Medium Post
I buy it! I was willing to chime in in its favor, from early on
Late-Leveragers seem to have conceded that it is a valid personal recounting
In case this changes location: LW comment thread on it
Geoff Anders Twitch Streams
Stream 1
Included some relevant backstory on the rift with EA, which probably also belongs in a timeline.
Audio was recovered, and there’s a transcript here of the second half for the less audio-inclined.
Geoff’s initial twitch-stream (with Anna Salamon) included commentary about how Leverage used to be pretty friendly with EA, and ran the first EAG. Several EA founders felt pretty close after that, and then there was some pretty intense drifting apart (partially over philosophical differences?). There was also some sort of kerfuffle where a lot of people ended up with the frame that “Leverage was poaching donors,” which may have been unfair to Leverage. As time went on, Geoff and other Leveragers were largely blocked from collaborations, and felt pretty shunned.
Some decent higher-detail text summaries here and here.
TekhneMakre started a thread with some good additional thoughts, here
Some Press Releases from Leverage
A letter from the Executive Director on Negative Past Experiences: Sympathy, Transparency, and Support
Commits to:
“reimburse any employee of any organization in the Leverage research collaboration for expenditures they made on therapy” (w/ details)
“we will share information about intention research in the form of essays, talks, podcasts, etc., so as to give the public greater context on this area of our past research”
Sets up 4 intermediaries (to ease coming forward with accounts, in cases of distrust)
EDIT: Names Anna Salamon, Eli Tyre, Matthew Graves, and Matt Falshaw as several somewhat-intermediary people who can be contacted.
“Leverage Research will thus seek to resolve the current conflict as definitively as possible, publicly dissociate from the Rationalist community, and take actions to prevent future conflict”
Overall, I found this one pretty heartening
Ecosystem Dissolution Agreement
The socially-enforced NDA-like agreement from the end of Leverage 1.0
EDIT: For a sense of Leverage’s information-suppression policy in prior years, here is the Basic Information Management Checklist from 2017
Leverage 1.0 Ecosystem information sharing and initial inquiry
Email from Matt Falshaw on Oct 19
ETA: Essay On Intention Research
This essay seemed really well done, overall.
Outlined the history of the research clearly. Seemed pretty good at sticking to fairly grounded descriptions, especially given the slipperiness of the subject matter. Tried to provide multiple hypotheses of what could be happening, and remained open to explanations nobody has come up with yet. This has been a tricky topic for people to describe, and I suspect he handled it well.
Mostly gives a history of Intention Research, a line of inquiry that started out poking at energywork and bodywork (directing attention with light touch), got increasingly into espousing detailed reads of each other’s nonverbals, and which eventually fed into some really awful interpersonal dynamics that got so bad that Leverage 1.0 was dissolved to defuse it.
Warnings are at the end. My sole complaint with the writing is that I wish they were outlined earlier.
ETA: Public Report on Inquiry Findings: Factors and Mistakes that Contributed to a Range of Negative Experiences on Our 2011-2019 Research Collaboration
I thought this was quite good. Reading this raised my esteem for Matt Falshaw.
I do think this accurately characterized a lot of the structural problems, and leaves me more optimistic that Leverage 2.0 will avoid those. If you are interested in the details of that, I recommend reading it.
I don’t think all of the problems were structural? But a lot of them were, and the ones that weren’t were often exacerbated by structural things. Putting the focus on fixing things at that layer looks like a reasonable choice.
3-5 people with extremely negative experiences and perspectives, out of something like 45 people, does sound plausible to me.
Something I felt wasn’t handled perfectly: the refusal of people with largely-negative experiences to talk with investigators reads to me as an indicator of a past loss or breach of trust. And while their absence is gestured at, I did feel like its significance tended to get downplayed more than I would have liked.
I have some difficulty understanding the descriptions by former Leverage members. Part of it is inferential distance, but even if you tell me what the words refer to, I am not sure I am painting my near-mode picture correctly. Like, when you say “bodywork”, now I imagine something like one person giving the other person a massage, where both participants believe that this action not only relaxes the body, but also helps to remove some harmful memes from the mind. -- Is this a strawman? Or is it a reasonable first approximation (which of course misses some important nuance)?
For me, getting these things right feels like I have an insight into how the organization actually works, on a social level. Approximate descriptions are okay. If massaging someone’s left shoulder helps them overcome political mindkilling, and massaging someone’s right shoulder protects them from Roko’s Basilisk, don’t tell me! You have the NDA, and I don’t actually care about this level of detail. Keep your secret tech! I just want to understand the dynamic, like if someone talks to a stranger and later feels like the person may have cast some curse on them, the reasonable response is to schedule a massage.
From all descriptions I have read so far, yours felt the most helpful in this direction. Thank you!
My impression is that Leverage’s bodywork is something closer to what other people call “energy work,” which probably puts it… closer to Reiki than massage?
But I never had it done to me, and I don’t super understand it myself! Pretty low confidence in even this answer.
Cult symptom! Invented terminology for invented, fictitious entities.
The observation might be correct but I don’t love the tone. It has some feeling of “haha, got you!” that doesn’t feel appropriate to these discussions.
Point taken, but I stand by the observation.