Threads Roundup
Several things under the LW Leverage Tag
Leverage Basic Facts EA Post & comment thread
I discovered this one a little late? Still flipping through it.
BayAreaHuman LW Post
By now, every concrete point made in that post has been confirmed as true or reasonable, by me or by at least one of my contacts (though not always by two of us). The tone is slightly aggressive, but it seems generally truth-seeking to me.
I think it leans more toward characterizing dysfunctional late-L1 than early-L1? But not strictly.
Someone, probably Geoff (it’s apparently the kind of thing he does, confirmed by 2+ people), sent out emails to friends of Leverage framing the post as an unwarranted attack and encouraging them to flood the comment thread with people’s positive experiences.
I do not like that he did this! I know someone else who intends to write up something more thorough about this. If they don’t, I am likely to comment on it myself, after saving evidence and articulating my thoughts.
EDIT: I do think a lot of the positive accounts are honest! I am not accusing any commenter of lying. My concern here is selective reporting, and something of a concentration-of-force dynamic that I believe may have been invoked deliberately, in a way that I do not trust as truth-seeking.
Matt Falshaw’s recent email mentioned non-Zoe people writing “disingenuous and deliberately misleading” posts in the past? If that was meant to implicate BAH, then I think the email itself was being a bit “disingenuous and deliberately misleading.”
Zoe’s Medium Post
I buy it! I was willing to chime in in its favor from early on.
Late-Leveragers seem to have conceded that it is a valid personal recounting
In case this changes location: LW comment thread on it
Geoff Anders Twitch Streams
Stream 1
Included some relevant backstory on the rift with EA, which probably also belongs in a timeline.
Audio was recovered, and there’s a transcript here of the second half for the less audio-inclined.
Geoff’s initial Twitch stream (with Anna Salamon) included commentary about how Leverage used to be pretty friendly with EA, and ran the first EAG. Several EA founders felt pretty close after that, and then there was some pretty intense drifting apart (partially over philosophical differences?). There was also some sort of kerfuffle where a lot of people ended up with the frame that “Leverage was poaching donors,” which may have been unfair to Leverage. As time went on, Geoff and other Leveragers were largely blocked from collaborations and felt pretty shunned.
Some decent higher-detail text summaries here and here.
TekhneMakre started a thread with some good additional thoughts here
Some Press Releases from Leverage
A letter from the Executive Director on Negative Past Experiences: Sympathy, Transparency, and Support
Commits to:
“reimburse any employee of any organization in the Leverage research collaboration for expenditures they made on therapy” (w/ details)
“we will share information about intention research in the form of essays, talks, podcasts, etc., so as to give the public greater context on this area of our past research”
Sets up four intermediaries (to make it easier to come forward with accounts in cases of distrust)
EDIT: Names Anna Salamon, Eli Tyre, Matthew Graves, and Matt Falshaw as several somewhat-intermediary people who can be contacted.
“Leverage Research will thus seek to resolve the current conflict as definitively as possible, publicly dissociate from the Rationalist community, and take actions to prevent future conflict”
Overall, I found this one pretty heartening
Ecosystem Dissolution Agreement
The socially enforced NDA-like agreement from the end of Leverage 1.0
EDIT: For a sense of Leverage’s information-suppression policy in prior years, here is the Basic Information Management Checklist from 2017
Leverage 1.0 Ecosystem information sharing and initial inquiry
Email from Matt Falshaw on Oct 19
ETA: Essay On Intention Research
This essay seemed really well done, overall.
Outlined the history of the research clearly. Seemed pretty good at sticking to fairly grounded descriptions, especially given the slipperiness of the subject matter. Tried to provide multiple hypotheses of what could be happening, and remained open to explanations nobody has come up with yet. This has been a tricky topic for people to describe, and I suspect he handled it well.
Mostly gives a history of Intention Research, a line of inquiry that started out poking at energywork and bodywork (directing attention with light touch), got increasingly into espousing detailed reads of each other’s nonverbals, and eventually fed into some really awful interpersonal dynamics that got so bad that Leverage 1.0 was dissolved to defuse it.
Warnings are at the end. My sole complaint with the writing is that I wish they had come earlier.
ETA: Public Report on Inquiry Findings: Factors and Mistakes that Contributed to a Range of Negative Experiences on Our 2011-2019 Research Collaboration
I thought this was quite good. Reading this raised my esteem for Matt Falshaw.
I do think it accurately characterizes a lot of the structural problems, and it leaves me more optimistic that Leverage 2.0 will avoid them. If you are interested in the details, I recommend reading it.
I don’t think all of the problems were structural? But a lot of them were, and the ones that weren’t were often exacerbated by structural things. Putting the focus on fixing things at that layer looks like a reasonable choice.
That 3-5 people out of something like 45 had extremely negative experiences and perspectives does sound plausible to me.
Something I felt wasn’t handled perfectly: the refusal of people with largely negative experiences to talk with investigators reads to me as an indicator of a past loss or breach of trust. And while their absence is gestured at, I felt the significance of this got downplayed more than I would have liked.