I want to note that this post (top-level) now has more than 3x the number of comments that Zoe’s does (or nearly 50% more comments than the Zoe+BayAreaHuman posts combined, if you think that’s a more fair comparison), and that no one has commented on Zoe’s post in 24 hours. [ETA: This changed while I was writing this comment. The point about lowered activity still stands.]
This seems really bad to me — I think that there was a lot more that needed to be figured out wrt Leverage, and this post has successfully sucked all the attention away from a conversation that I perceive to be much more important.
I keep deleting sentences because I don’t think it’s productive to discuss how upset this makes me, but I am 100% with Aella here. I was wary of this post to begin with and I feel something akin to anger at what it did to the Leverage conversation.
I had some contact with Leverage 1.0 — had some friends there, interviewed for an ops job there, and was charted a few times by a few different people. I have also worked for both CFAR and MIRI, though never as a core staff member at either organization; and more importantly, I was close friends with maybe 50% of the people who worked at CFAR from mid-2017 to mid-2020. Someone very close to me previously worked for both CFAR and Leverage. With all that backing me up: I am really very confident that the psychological harm inflicted by Leverage was both more widespread and qualitatively different than anything that happened at CFAR or MIRI (at least since mid-2017; I don’t know what things might have been like back in, like, 2012).
The comments section of this post is full of CFAR and MIRI employees attempting to do collaborative truth-seeking. The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!
CFAR and MIRI have their flaws, and several people clearly have legitimate grievances with them. I personally did not have a super great experience working for either organization (though that has nothing to do with anything Jessica mentioned in this post; just run-of-the-mill workplace stuff). Those flaws are worth looking at, not only for the edification of the people who had bad experiences with MIRI and CFAR, but also because we care about being good people building effective organizations to make the world a better place. They do not, however, belong in a conversation about the harm done by Leverage.
(Just writing a sentence saying that Leverage was harmful makes me feel uncomfortable, feels a little dangerous, but fuck it, what are they going to do, murder me?)
Again, I keep deleting sentences, because all I want to talk about is the depth of my agreement with Aella, and my uncharitable feelings towards this post. So I guess I’ll just end here.
It seems like it’s relatively easy for people to share information in the CFAR+MIRI conversation. On the other hand, for the people who actually have the most central information to share in the Leverage conversation, it’s not as easy to share it.
In many cases I would expect that private, in-person conversations are needed to progress the Leverage debate, and that just takes time. Those people at Leverage who want to write up their own experiences likely benefit from having time to do that.
Practically, helping Anna get an overview of the timeline of members and funders, and getting people to share stories with Aella, seems to be the way forward, and it’s largely not about leaving LW comments.
I agree with the intent of your comment, mingyuan, but perhaps the reason for the asymmetry in activity on this post is simply that there are an order of magnitude (or several orders of magnitude?) more people with some experience of and interaction with CFAR/MIRI (especially CFAR) than with Leverage?
I think some of it has got to be that it’s somehow easier to talk about CFAR/MIRI, rather than it being a sheer-numbers thing. I think Leverage is somehow unusually hard to talk about, such that maybe we should figure out how to be extraordinarily kind/compassionate/gentle to anyone attempting it, or something.
I agree that Leverage has been unusually hard to talk about bluntly or honestly, and I think this has been true for most of its existence.
I also think the people at the periphery of Leverage are starting to absorb the fact that they systematically had things hidden from them. That may be giving them new pause before engaging with Leverage as a topic.
(I think that seems potentially fair and considerate. To me, it doesn’t feel like the same concern applies to engaging about CFAR. I also agree that there were probably fewer total people exposed to Leverage at all.)
...actually, let me give you a personal taste of what we’re dealing with?
The last time I chose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override an explicit but non-legal privacy agreement* to get a sanity check. When I was honest about having done so shortly thereafter, I completely and permanently lost one of my friendships as a result.
Lost-friend says they were traumatized as a result of me doing this. That having “made the mistake of trusting me” hurt their relationships with other Leveragers. That at the time, they wished they’d lied to me, which stung.
I talked with the person I used as a sanity-check recently, and I get the sense that I still only managed to squeeze out ~3-5 sentences of detail at the time.
(I get the sense that I still did manage to convey a pretty balanced account of what was going through my head at the time. Somehow.)
It is probably safer to talk now than it was then. At least, that’s my current view. Two years’ distance, community support, a community that is willing to be more sympathetic to people who get swept up in movements, and a taste of what other people were going through (and that you weren’t the only person going through this) all tend to help matters.
(Edit: They’ve also shared the Ecosystem Dissolution Information Arrangement, which I find a heartening move. They mention that it was intended to be more socially-enforced than legally-binding. I don’t like all of their framing around it, but I’ll pick that fight later.)
It wouldn’t surprise me at all, if most of this gets sorted out privately for now. Depending a bit on how this ends—largely on whether I think this kind of harm is likely to recur or not—I might not even have an objection to that.
But when it comes to Leverage? These are some of the kinds of thoughts and feelings that I worry we may later find played a role in keeping this quiet.
I’m finally out about my story here! But I think I want to explain a bit of why I wasn’t being very clear, for a while.
I’ve been “hinting darkly” in public rather than “telling my full story” due to a couple of concerns:
I don’t want to “throw ex-friend under the bus,” to use their own words! Even friend’s Leverager partner (who they weren’t allowed to visit, if they were “infected with objects”) seemed more “swept-up in the stupidity” than “malicious.” I don’t know how to tell my truth, without them feeling drowned out. I do still care about that. Eurgh.
Via models that come out of my experience with Brent: I think this level of silence makes the most sense if some ex-Leveragers did get a substantial amount of good out of the experience (sometimes with none of the bad, sometimes alongside it), and/or if there were a lot of regrettable actions taken by people who were swept up in this at the time, people who would ordinarily be harmless under normal circumstances. I recognize that bodywork was very helpful to my friend in working through some of their (unrelated) trauma. I am more than a little reluctant to put people through the sort of mob-driven invalidation I felt, in the face of the early intensely-negative community response to the Brent exposé?
Surprisingly irrelevant for me: I am personally not very afraid of Geoff! Back when I was still a nobody, I brute-forced my way out of an agonizing amount of social-anxiety through sheer persistence. My social supports range both wide and deep. I have pretty strong honesty policies. I am not currently employed, so even attacking my workplace is a no-go. I’m planning to marry someone cool this January. Truth be told? I pity any fool who tries to character-assassinate me.
...but I know that others are scared of Geoff. I have heard the phrase “Geoff will do anything to win” bandied about so often, that I view it as something of a stereotyped phrase among Leveragers. I am honestly not sure how concerned I actually should be about it! But it feels like evidence of a narrative that I find pretty concerning, although I don’t know how this narrative emerged.
The last time I chose to talk straightforwardly and honestly about Leverage, with somebody outside of it? I had to hard-override a privacy concern* to get a sanity check. When I was honest about having done so shortly thereafter, I completely lost one of my friendships as a result.
Lost-friend says they were traumatized as a result of me doing this. That having “made the mistake of trusting me” hurt their relationships with other Leveragers. That at the time, they wished they’d lied to me, which stung.
Any thoughts on why this was coming about in the culture?
If anyone feels that way (like the lost friend) and wants to talk to me about it, I’d be interested in learning more about it.
* I could tell that this had some concerning toxic elements, and I needed an outside sanity-check. I think under the circumstances, this was the correct call for me. I do not regret picking the particular person I chose as a sanity-check. I am also very sympathetic to other people not feeling able to pull this, given the enormous cost to doing it at the time.
This is not a strong systematic assessment of how I usually treat privacy agreements. My harm-assessment process is usually structured a bit like this, with some additional pressure from an “agreement-to-secrecy,” and also factors in the meta-secrecy-agreements around “being able to be held to secrecy agreements” and “being honest about how well you can be held to secrecy agreements.”
No, I don’t feel like having a long discussion about privacy policies right now. But if you care? My thoughts on information-sharing policy were valuable enough to get me into the 2019 Review.
If you start on this here, I will ignore you.
The fact that the people involved apparently find it uniquely difficult to talk about is a pretty good indication that Leverage != CFAR/MIRI in terms of cultishness/harms etc.
Yes; I want to acknowledge that there was a large cost here. (I wasn’t sure, from just the comment threads; but I just talked to a couple people who said they’d been thinking of writing up some observations about Leverage but had been distracted by this.)
I am personally really grateful for a bunch of the stuff in this post and its comment thread. But I hope the Leverage discussion really does get returned to, and I’ll try to lend some momentum that way. I hope some others do too, insofar as they can find ways to actually help people put things together or talk.
Seems to me that, given the current situation, it would probably be good to wait maybe two more days until this debate naturally reaches its end, and then restart the debate about Leverage.
Otherwise, we risk having two debates running in parallel and interfering with each other.
The comments section of this post is full of CFAR and MIRI employees attempting to do collaborative truth-seeking. The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!
Then it is good that this debate happened. (Despite my shock when I first saw it.) It’s just the timing with regard to the debate about Leverage that is unfortunate.
When everyone knows everyone else, it’s more like Facebook than, say, Reddit. I don’t know why so many real-life organizations base their discussions on these open online forums. Maybe they want to attract more people to think about certain problems. Maybe they want to spread their memes. Either way, normal academic research doesn’t involve knocking on people’s doors and asking them if they are interested in doing such-and-such research. To a less extreme degree, researchers don’t even ask their family and friends to join their research circle. When you befriend your coworkers in the corporate world, things can get real messy real quick, depending on the extent to which they are involved in or interfere with your life outside of work. Maybe that’s why these organizations are distinguishing themselves from your typical workplace.
MIRI and CFAR are non-profits; they need to approach fundraising and talent-seeking differently than universities or for-profit corporations do.
In addition, neither of them is a pure research institution. MIRI’s mission includes making people who work on AI, or who make important decisions about AI, aware of the risks involved. CFAR’s mission includes teaching rationality techniques. Both of them require communication with the public.
This doesn’t explain all the differences, but it explains at least some of them.
The only comments made by Leverage employees in comparable threads were attempts at reputation management. That alone tells you a lot!
So much of this on this site; it’s incredible. Makes me wonder if people are consciously doing it. If they are, then why would they even join this cult in the first place? Personally, I’ve observed that the people who easily join cults tend to be very impressionable. Even my wife got duped by a couple of middle-aged men. It’s a different type of intelligence and skill set than the kind employed at colleges and research institutions.
Uhh. Sadly, this attitude is quite common, so I will try to explain. Some people are in general more gullible or easier to impress, yes. But that is just one part of the equation. The remaining parts are:
everyone is more vulnerable to manipulation that is compatible with their already existing opinions and desires;
people are differently vulnerable at different moments of their lives, so it’s a question of luck whether you encounter the manipulation at your strongest or weakest moment;
the environment can increase or decrease your resistance: how much free time you have, how many people make a coordinated effort to convince you, whether you have enough opportunity to meet other people or stay alone and reflect on what is happening, whether something keeps you worried and exhausted, etc.
So, some people might easily believe in Mother Gaia but never in Artificial Intelligence; for other people it is the other way round. You can manipulate some people by appealing to their selfish desires, other people by appealing to their feelings of compassion.
Many people are just lucky that they never met a manipulative group targeting specifically their weaknesses, exactly at a vulnerable moment of their lives. It is easy to laugh at people whose weaknesses are different from yours when they fail in a situation that exploits their weaknesses.