I just posted an update on behalf of Leverage Research to LessWrong along with an invite to an AMA with Leverage Research next weekend, as it seems from the comments that there isn’t a lot of common knowledge about our current work or other aspects of our history. I encourage people to read this for additional context, and I hope the OP will be able to update this post to incorporate some of that.
I also want to briefly address some of the items raised here.
Information management policies
Leverage Research has long been concerned about the potential negative consequences of the misuse of knowledge garnered from research. These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid as well for the social sciences.
Starting in 2012, Leverage Research had an information management policy designed to prevent negative consequences from the premature dissemination of information. Our information policy from 2012–2016 required permission for the release of longform information on the internet. We had an information approval team, with most information release requests being approved. In 2016, the policy was revised, in part to give staff blanket permission for the sharing of longform information on the internet unrelated to their work at Leverage. Based on our ED’s recollection, in no case was permission withheld for the online publication of regular personal information.
Our information management policy aimed to balance simplicity and usability with effectiveness, and it certainly did not get everything right. One of the negative consequences of our information policy, as we have learned, is the way it made some regular interactions with people outside of the relevant information circles more difficult than intended. We intend to learn from this experience and do better with information management in the future.
Dangers and harms from psychological practices
As mentioned in my update post, we are very concerned about potential harms to individuals from experimenting with psychological tools and so—when we begin to distribute some of these tools to the public—we will include in the release descriptions of the wide variety of potential near-term and long-term dangers from psychological experimentation that we are aware of.
The post mentions “hearing people who did lots of charting within Leverage report that it led to dissociation and fragmentation, that they have found difficult to reverse.” We believe that the tools we will release to the public, including Belief Reporting and basic charting, are generally safe, and will do our best to alert people to the potential dangers.
If anyone has experienced negative effects from psychological experimentation, including with rationality training, meditation, circling, Focusing, IFS, Leverage’s charting or Belief Reporting tools (or word-of-mouth copies of these tools), or similar techniques, please do reach out to us at contact@leverageresearch.org. We are keen to gain as much information as possible on the harms and dangers as we prepare to release our psychology research.
Dating policies
From 2011–2019, Leverage did not focus on the development of standard professional norms or policies. We had an employee handbook covering equal employment opportunity, sexual and other forms of harassment, company conduct, and complaints procedures, but had no policy (and still have no policy) on who should date whom. It is true that our Executive Director had three long-term consensual relationships with women employed by Leverage Research or affiliated organizations during their history. Managing the potential for abuses by those in positions of power is very important to us. If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org or larissa.e.rowe@gmail.com.
Following 2019, Leverage began to prioritize the development of professional standards. We expect to develop policies on dating in the workplace and other topics as part of this effort. As the HR representative at Leverage Research, developing these standards further is my responsibility.
Charting/debugging was always optional
The post claims that “Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory “trainer”, and to “train” other members.” This is inaccurate, and I’m not sure how this misunderstanding could have occurred.
Neither charting nor debugging was ever required of any person at any time at Leverage, either as part of their work or as part of the hiring process. Many individuals chose to be charted because of their interest in it, either for work or self-improvement-related reasons. But charting—as well as other psychological interventions—was and should remain strictly voluntary. Individuals who were uninterested in charting could have (and did) study other topics.
I hope this, along with my LessWrong forum post, helps to answer some of the questions and concerns raised here. If you have any questions not answered by this comment, my post, or other materials online, please feel free to email me (larissa@leverageresearch.org) or join us at our virtual office next weekend.
I would suggest that anything in this vein should be reported to Julia Wise, as I believe she is a designated person for reporting concerns about community health, harmful behaviours, abuse, etc. She is unaffiliated with Leverage, and is a trained social worker.
When you asked if you could confidentially send me a draft of your post about Will’s book to check, I said yes.
The next week you sent me a couple more emails with different versions of the draft. When I saw that the draft was 18 pages of technical material, I realized I wasn’t going to be a good person to review it. That’s when I forwarded to someone on Will’s team asking if they could look at it instead of me.
I should never have done that, because your original email asked me not to share it with anyone. For what it’s worth, the way that this happened is that when I was deciding what to do with the last email in the chain, I didn’t remember and didn’t check that the first email in the chain requested confidentiality. This was careless of me, and I’m very sorry about it.
I think the underlying mistake I made was not having this kind of situation flagged as sensitive in my mind, which contributed to my forgetting the original confidentiality request. If the initial email had been about some more personal situation, I am much more sure it would have been flagged in my mind as confidential. But because this was a critique of a book, I had it flagged as something like “document review” in my mind. This doesn’t excuse my mistake—and any breach of trust is a serious problem given my role—but I hope it helps show that it wasn’t intentional.
I now try to be much more careful about situations where I might make a similar mistake.
Personally, I don’t really blame you or think less of you for this screwup. I never got the impression that you are the sort of person who should be sent confidential book review drafts. Maybe you’d disagree, but that seems like a misunderstanding of your role to me.
It seemed clear to me that you made yourself available to confidential reports regarding conflict, abuse, and community health. Not disagreements with a published book. It makes sense that you didn’t have a habit of mentally flagging those emails as confidential.
Regardless, I trust that you’ve been more careful since then, and I appreciate how clearly you own up to this mistake.
I want to offer my +1 that I strongly believe Julia’s trustworthy for reports regarding Leverage.
It makes sense that you didn’t have a habit of mentally flagging those emails as confidential.
If I give someone access to a draft of any kind and they want to forward it to someone else, I would generally expect them to put the author of the draft in the CC. Even in the absence of a promise of confidentiality, I consider sharing someone’s draft without their permission, and withholding the information that you shared it, bad behavior.
This doesn’t excuse my mistake—and any breach of trust is a serious problem given my role—but I hope it helps show that it wasn’t intentional.
Of course, saying “my mistake wasn’t intentional but accidental” doesn’t show that it wasn’t intentional. The only thing that would show it wasn’t intentional would be accepting consequences meaningful enough that it doesn’t look like your mistake benefited CEA.
Saying “I’m sorry I broke your trust” without accepting any consequences for it feels cheap. To me such a mistake feels like you owe something to guzey.
One thing you could have done if you actually cared would have been to advocate for guzey in this exchange even if that goes against your personal positions.
Only admitting the mistake in the comments, and not in a more visible manner, also doesn’t feel like treating it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes
Only admitting the mistake in the comments, and not in a more visible manner, also doesn’t feel like treating it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes
For what it’s worth, I do think this is probably a serious enough mistake to go on this page.
Wow, that is very bad. Personally I’d still trust Julia as someone to report harms from Leverage to, mostly from generally knowing her and knowing her relationship to Leverage, but I can see why you wouldn’t.
One of the negative consequences of our information policy, as we have learned, is the way it made some regular interactions with people outside of the relevant information circles more difficult than intended.
Is Leverage willing to grant a blanket exemption from the NDAs which people evidently signed, to rectify the potential ongoing harms of not having information available? If not, can you share the text of the NDAs?
Please consider that the people who most experienced harms from psychological practices at Leverage may not feel comfortable emailing that information to you. Given what they experienced, they might reasonably expect the organization to use any provided information primarily for its own reputational defense, and to discredit the harmed parties.
Dating policies
Thank you for the clarity here.
Charting/debugging was always optional
This is not my understanding. My impression is that a strong expectation was established by individual trainers with their trainees. And that charting was generally done during the hiring process. Even if the stated policy was that it was not required/mandatory.
It seems that Leverage is currently planning to publish a number of their techniques, and from Leverage’s point of view there are considerations that releasing the techniques could be dangerous for the people using them. To me that does suggest a sincere desire to use provided information in a useful way.
If you are interested in being involved in the beta testing of the starter pack, or if you have experienced negative effects from psychological experimentation, including with rationality training, meditation, circling, Focusing, IFS, Leverage’s charting or belief reporting tools (or word-of-mouth copies of these tools), or similar techniques, please do reach out to us at contact@leverageresearch.org. We are keen to gain as much information as possible on the harms and dangers as we prepare to release our psychology research.
If there are particular people who feel that they have been harmed, it would be great to still have a way for that information to reach Leverage. Maybe a third party could be found to mediate the conversation?
Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?
Why is this getting downvotes? It’s a constructive comment containing a good idea (mediation to address concerns) and pointing at a source of transparency, which everyone here has been asking for.
I’m not a rationalist, and I’m new to actually saying anything on LW (despite lurking for 4ish years now—and yes, I made this alt today), but it seems like this would be the type of community to be more open-minded about a topic than what I’m seeing. By “what I’m seeing” I mean people are just throwing rocks and being unwilling to find any way to work with someone who’s trying to address the concerns of the OP and commenters.
I didn’t downvote ChristianKl’s comment, but I feel like it’s potentially a bit naive.
>Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?
In my view, the question isn’t so much about whether they genuinely don’t want harms to happen (esp. because harming people psychologically often isn’t even good for growing the organization, not to mention the reputational risks). I feel like the sort of thing ChristianKl pointed out is just a smart PR move given what people already think about Leverage, and, conditional on the assumption that Leverage is concerned about their reputation in EA, it says nothing about genuine intentions.
Instead, what I’d be curious to know is whether they have the integrity to be proactively transparent about past mistakes, radically changed course when it comes to potentially harmful practices, and refrain from using any potentially harmful practices in cases where it might be advantageous on a Machiavellian-consequentialist assessment. To ascertain those things, one needs to go beyond looking at stated intentions. “Person/organization says nice-sounding thing, so they seem genuinely concerned about nice aims, therefore stop being so negative” is a really low bar and probably leads to massive over-updating in people who are prone to being too charitable.
I feel like the sort of thing ChristianKl pointed out is just a smart PR move given what people already think about Leverage, and, conditional on the assumption that Leverage is concerned about their reputation in EA, it says nothing about genuine intentions.
I didn’t argue that it says something about good intentions. My main argument is that it’s useful to cooperate with Leverage on releasing their techniques with the safety warnings that are warranted given past problems; not doing so increases the chances that people will use the techniques in a way that messes them up.
I do consider Belief Reporting to be a very valuable invention, and I think it’s plausible that this is true for more of what Leverage produced. I do see that a technique like Belief Reporting allows for scientific experiments that weren’t possible before.
Information gathered from the experiments already run can quite plausibly help other people avoid harm when integrated into the starter kit that they develop.
Instead, what I’d be curious to know is whether they have the integrity to be proactively transparent about past mistakes, radically changed course when it comes to potentially harmful practices, and refrain from using any potentially harmful practices in cases where it might be advantageous on a Machiavellian-consequentialist assessment.
I think skepticism about nice words without difficult-to-fake evidence is warranted, but I also think some of this evidence is already available.
For example, I think it’s relatively easy to verify that Leverage is a radically different organization today. The costly investments we’ve made in history of science research provide the clearest example as does the fact that we’re no longer pursuing any new psychological research.
I think the fact that it is now a four-person remote organization doing mostly research on science, as opposed to an often-live-in organization with dozens of employees doing intimate psychological experiments while pursuing various research paths, tells me that you are essentially a different organization; the only commonalities are the name and the fact that Geoff is still the leader.
If you hover over the karma counter, you can see that the comment is sitting at −2 with 12 votes, which means that there is a significant disagreement on how to judge it, not agreement that it should go away.
(It makes some sense to oppose somewhat useful things that aren’t as useful as they should be, or as safe as they should be, I think that is the reason for this reaction. And then there is the harmful urge to punish people who don’t punish others, or might even dare suggest talking to them.)
I’d rather not say, for the sake of my anonymity—something which is important to me because this:
However, I would also like to note that Leverage 1.0 has historically been on the receiving end of substantial levels of bullying, harassment, needless cruelty, public ridicule, and more by people who were not engaged in any legitimate epistemic activity. I do not think this is OK. I intend to call out this behavior directly when I see it. I would ask that others do so as well.
is a real concern. I’ve seen it firsthand—people associated with Leverage being ostracized, bullied, and made to feel very unwelcome and uncomfortable at social events and in online spaces by people in nearby communities, including this one.
It seems like a real risk to me that any amount of personal information I give will be used to discover my identity, and I’ll be subject to the same.
Which, by the way, is despicable, and I find it alarming that only one person (besides Kerry) in this thread has acknowledged this behavior pattern.
I said in another comment that I didn’t make an alt to come here and “defend Leverage”—this instance is the exception to that. These people are human beings.
If people are being bullied, that’s extremely bad, and if you see that and call it out you’re doing a noble thing.
But all I’ve seen in this thread—I can’t comment on e.g. what happens in person in the Bay Area, since that’s thousands of miles away from where I am—is people saying negative things about Leverage Research itself and not about individuals associated with it, with the single exception of the person in charge of Leverage, who fairly credibly deserves some criticism if the negative things being said about the organization are correct.
Bullying people is cruel and harmful. I’m not so sure there’s anything wrong with “bullying” an organization. Especially if that organization is doing harm, or if there is good reason to think it is likely to do harm in the future.
I’ve seen someone from a different org, but with a similar valence in the community, get treated quite poorly at a party when they let their association be known. It was like the questioner stopped seeing them as a person with feelings and only treated them as an extension of the organization. I felt gross watching it and regret not saying anything at the time.
It seems overwhelmingly likely to me that Leveragers faced the same thing, and also that some members lumped some legitimate criticisms or refusals to play along in with this unacceptable treatment, because that’s a human thing to do.
ETA: I talked to the person in question and they don’t remember this, so apparently it made a bigger emotional impression on me than them (they remembered a different convo at the same event that seemed like the same kind of thing, but didn’t report it being particularly unpleasant). I maintain that if I were regularly subject to what I saw it would have been quite painful, and imagine that to be true for at least some other people.
I’m not so sure there’s anything wrong with “bullying” an organization.
There’s a pragmatic question of building reliable theory of what’s going on, which requires access to the facts. Even trivial inconvenience for those who have the facts in communicating them does serious damage to this process’s ability to understand what’s going on.
The most valuable facts are those that contradict the established narrative of the theory, they can actually be relevant for improving it, for there is no improvement without change. Seeing a narrative that contradicts the facts someone has is already disheartening, so everything else that could possibly be done to make sharing easier, and not make it harder, should absolutely be done.
Yes, but imagine for a second that you worked at Leverage, and you’re reading this thread (noting that I’d be surprised if several people from both 1.0 and 2.0 were not). Do you think that, whether they had a negative experience or a positive experience, they would feel comfortable commenting on that here?
(This is the relevant impact of the things mentioned in my previous comment.)
No. Of course not. Because the overpowering narrative in this thread, regardless of the goals or intentions of the OP, is “Leverage was/is a cult”.
No one accused of being in a cult is going to come into the community of their accusers and say a word. Of course, with the exception of two people in 2.0 who have posted here, one of whom is a representative who has been accused of plotting to coerce and manipulate victims, and the other of whom has been falsely accused of trying to hide their identity in the thread.
And this is despite Leverage’s efforts to become more legible and transparent.
If someone who worked there had negative experiences as a result, then, of course, they may not want to post publicly in an environment where the initiative that they once put their time, energy, and effort into is being so highly criticized, and in some cases, again, blatantly accused of being a literal cult or what I would call a “strawman’s term” for a cult. They also may not want to air their concerns with their ex-employers in this public setting.
And on the other hand, if someone who worked there had positive experiences, they are left to watch as, once again, the discourse of this group disallows them from giving input without figuratively burning them at the stake for supporting something that they personally experienced and had no issue with.
And these are just the first few things that came to mind for me when considering why they may not be present in this conversation.
My main concern here is that this space doesn’t allow them to speak AT ALL without serious repercussions, and that is caused by the pattern I mentioned in my comment above. Because of this, the discourse around Leverage Research on this thread (while there has still been new information exchanged, and I do not want to discount that) is doomed to be an echo chamber between people who are degrees (plural) away from whatever the truth may be.
This is my takeaway from this entire thread, and it’s a shame.
(Sorry for using the words “of course” “accused”/”accusers” etc so frequently—I am tired.)
I don’t know how comfortable any given person would feel commenting here. I do know that Kerry Vaughan, who is with Leverage now, has evidently felt comfortable enough to comment. I have no idea who you are but it seems fairly apparent that you have some association with Leverage, and you evidently feel comfortable enough to comment.
You say that one of those people (presumably meaning Kerry) “has been accused of plotting to coerce and manipulate victims”. I can’t find anywhere where anyone has made any such accusation. I can’t find any instance of “coerce” or any of its forms other than in your comment above. I find two other instances of “manipulate” and related words; one is specifically about Geoff Anders (who so far as I know is not the same person as Kerry Vaughan) and the other is talking generally about psychological manipulation and doesn’t make any accusations about specific people.
You say that the other person (presumably meaning you) “has been falsely accused of trying to hide their identity”, but so far as I can make out you are openly trying to hide your identity (on the grounds that if people could tell who you are then you would be mistreated on account of being associated with Leverage).
(I have to say that I’m a bit confused by the anonymity thing. Are you concerned that if you were onymous then people “in real life” would read what you say here, realise that you’re associated with Leverage, and mistreat you? Or that if you were onymous then people here would recognize your name, realise that you’re associated with Leverage, and mistreat you? Or something else? The first would make sense only if “in real life” you were concealing whatever associations you have with Leverage, which I have to say would itself be a bit concerning; the second would make sense only if knowing your name would make people in this thread think you more closely associated with Leverage than they already think you, and unless you’re Literal Geoff Anders or something that seems a little unlikely. And I’m not sure what “something else” might be.)
Saying that someone is in a cult (though I note that most people have been pretty careful not to use quite that terminology) isn’t an accusation. Not at the person in question, anyway. For sure it’s the sort of thing that many people will find uncomfortable. But what’s uncomfortable here is the content of the claim itself, no? So what less-bullying thing would you prefer someone to do, if they are concerned that an organization other people around them might join is worryingly cult-like? Should they just not say anything, because saying “X is cult-like” is bullying? That policy means never being able to give warning to people who might be getting ensnared by an actual cult. What’s the alternative?
Saying that someone is in a cult (though I note that most people have been pretty careful not to use quite that terminology) isn’t an accusation. Not at the person in question, anyway.
“You are in a cult” is absolutely an accusation directed at the person. I can understand moral reasons why someone might wish for a world in which people assigned blame differently, and technical reasons why this feature of the discourse makes purely descriptive discussions unhelpfully fraught, but none of that changes the empirical fact that “You are in a cult” functions as an accusation in practice, especially when delivered in a public forum. I expect you’ll agree if you recall specific conversations-besides-this-one where you’ve heard someone claim that another participant is in a cult.
Maybe you’re right. So, same question as for ooverthinkk: suppose you think some organization that people you know belong to is a cult, or has some of the same bad features as cults. What should you do?
(It seems to me that ooverthinkk feels that at least some of what is being said in this thread about Leverage is morally wrong, and I hope there’s some underlying principle that’s less overreaching than “never say that anything is cult-like” and less special-pleading than “never say bad things about Leverage”—but I don’t yet understand what that underlying principle is.)
The first person was Larissa, the second person was Kerry.
The “anonymity thing” does not fall under the first category. I’d just prefer, as I stated before, not to be targeted “in real life” for my views on this thread.
The “bullying” that I’m referring to happened/happens outside of this thread, and is in no way limited to instances of people being accused of being “in a cult”.
D’oh! I’d forgotten that Larissa had commented here too. My apologies.
As I’ve said, I have no knowledge of any bullying that may or may not be occurring elsewhere (especially in person in the Bay Area), and if anyone’s getting bullied then that’s bad. If that isn’t common knowledge, then there’s a problem. But the things in this thread that you’ve taken exception to don’t seem to me to come close to bullying. (Obviously, though, they could be part of a general pattern of excessive hostility to all things Leverage.)
Do you think OP was wrong to post what they did? If so, is that because you think the things they’ve said about Leverage are factually wrong, or because you think people who think they see an organization behaving in potentially harmful ways shouldn’t say so, or what?
If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org.
Bullshit. This is not how you prevent abuse of power. This is how you cover it up.
Let’s use some common sense here, please. If—hypothetically speaking—some organization abuses people, what is the most likely consequence if the victim e-mails the organization’s PR person in confidence?
My model says, the PR person will start working on a story that protects the organization, with the advantage that the PR person can publish their version before the victim does. (There are also other options, such as threatening the victim, which wouldn’t be available if the victim told their story to someone else first.)
These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid as well for the social sciences.
Social science infohazards are not a thing, because they must be implemented by an organization to work, and organizations leak like a sieve. Even nuclear secrets leak. This demand for secrecy is a blatant excuse used to obstruct oversight and to prevent peer review. What you’re doing is the opposite of science.
This demand for secrecy is a blatant excuse used to obstruct oversight and to prevent peer review. What you’re doing is the opposite of science.
Interestingly, “peer review” occurs pretty late in the development of scientific culture. It’s not something we see in our case studies on early electricity, for example, which currently cover the period between 1600 and 1820.
What we do see throughout this history is the norm of researchers sharing their findings with others interested in the same topics. It’s an open question whether Leverage 1.0 violated this norm. On the one hand, they had a quite vibrant and open culture around their findings internally and did seek out others who might have something to offer to their project. On the other hand, they certainly didn’t make any of this easily accessible to outsiders. I’m inclined to think they violated some scientific norms in this regard, but I think the work they were doing is pretty clearly science, albeit early-stage science.
I want to draw attention to the fact that “Kerry Vaughan” is a brand new account that has made exactly three comments, all of them on this thread. “Kerry Vaughan” is associated with Leverage. “Kerry Vaughan”’s use of “they” to describe Leverage is deliberately misleading.
If “it’s not unscientific because it merely takes science back 200-400 years” is the best defense that LEVERAGE ITSELF can give for its own epistemic standards then any claims it has to scientific rigor are laughable. 1600 was the time of William Shakespeare.
Edit: I’m not saying that science in 1600 was laughable. I’m saying that performing 1600-style science today is laughable.
I want to draw attention to the fact that “Kerry Vaughan” is a brand new account that has made exactly three comments, all of them on this thread. “Kerry Vaughan” is associated with Leverage. “Kerry Vaughan”’s use of “they” to describe Leverage is deliberately misleading.
I’m not hiding my connection to Leverage which is why I used my real name, mentioned that I work at Leverage in other comments, and used “we” in connection with a link to Leverage’s case studies. I used “they” to refer to Leverage 1.0 since I didn’t work at Leverage during that time.
I want to draw attention to the fact that “Kerry Vaughan” is a brand new account that has made exactly three comments, all of them on this thread. “Kerry Vaughan” is associated with Leverage. “Kerry Vaughan”’s use of “they” to describe Leverage is deliberately misleading.
To be fair, KV was open about that association in both previous comments, using ‘we’ in the first and including this disclaimer in the second --
(I currently work at Leverage research but did not work at Leverage during Leverage 1.0 (although I interacted with Leverage 1.0 and know many of the people involved). Before working at Leverage I did EA community building at CEA between Summer 2014 and early 2019.)
-- which also seems to explain the use of ‘they’ in KV’s third comment, which referred specifically to “Leverage 1.0”.
(I hope this goes without saying on LW, but I don’t mean this as a general defense of Leverage or of KV’s opinions. I know nothing about either beyond what I’ve read here, and I haven’t even read all the relevant comments. Personally I wouldn’t get involved with an organisation like Leverage.)
Hi BayAreaHuman,
I just posted an update on behalf of Leverage Research to LessWrong along with an invite to an AMA with Leverage Research next weekend, as it seems from the comments that there isn’t a lot of common knowledge about our current work or other aspects of our history. I encourage people to read this for additional context, and I hope the OP will be able to update this post to incorporate some of that.
I also want to briefly address some of the items raised here.
Information management policies
Leverage Research has long been concerned about the negative consequences of the potential misuse of knowledge garnered from research. These concerns are widely shared for research in the hard sciences (e.g., nuclear physics), but are valid for the social sciences as well.
Starting in 2012, Leverage Research had an information management policy designed to prevent negative unintended consequences from the premature dissemination of information. Our information policy from 2012 to 2016 required permission for the release of longform information on the internet. We had an information approval team, and most information release requests were approved. In 2016, the policy was revised, in part to give staff blanket permission to share longform information on the internet unrelated to their work at Leverage. Based on our ED’s recollection, in no case was permission withheld for the online publication of regular personal information.
Our information management policy aimed to balance simplicity and usability with effectiveness, and it certainly did not get everything right. One of the negative consequences of our information policy, as we have learned, is that it made some regular interactions with people outside of the relevant information circles more difficult than intended. We intend to learn from this experience and do better with information management in the future.
Dangers and harms from psychological practices
As mentioned in my update post, we are very concerned about potential harms to individuals from experimenting with psychological tools and so—when we begin to distribute some of these tools to the public—we will include in the release descriptions of the wide variety of potential near-term and long-term dangers from psychological experimentation that we are aware of.
The post mentions “hearing people who did lots of charting within Leverage report that it led to dissociation and fragmentation, that they have found difficult to reverse.” We believe that the tools we will release to the public, including Belief Reporting and basic charting, are generally safe, and will do our best to alert people to the potential dangers.
If anyone has experienced negative effects from psychological experimentation, including with rationality training, meditation, circling, Focusing, IFS, Leverage’s charting or Belief Reporting tools (or word-of-mouth copies of these tools), or similar techniques, please do reach out to us at contact@leverageresearch.org. We are keen to gain as much information as possible on the harms and dangers as we prepare to release our psychology research.
Dating policies
From 2011 to 2019, Leverage did not focus on developing standard professional norms or policies. We had an employee handbook covering equal employment opportunity, sexual and other forms of harassment, company conduct, and complaints procedures, but had no policy (and still have no policy) on who should date whom. It is true that our Executive Director had three long-term consensual relationships with women employed by Leverage Research or affiliated organizations over that period. Managing the potential for abuses by those in positions of power is very important to us. If anyone is aware of harms or abuses that have taken place involving staff at Leverage Research, please email me, in confidence, at larissa@leverageresearch.org or larissa.e.rowe@gmail.com.
Following 2019, Leverage began to prioritize the development of professional standards. We expect to develop policies on dating in the workplace and other topics as part of this effort. As the HR representative at Leverage Research, developing these standards further is my responsibility.
Charting/debugging was always optional
The post claims that “Members who were on payroll were expected to undergo charting/debugging sessions with a supervisory “trainer”, and to “train” other members.” This is inaccurate, and I’m not sure how this misunderstanding could have occurred.
Neither charting nor debugging was ever required of any person at any time at Leverage, either as part of their work or as part of the hiring process. Many individuals chose to be charted because of their interest in it, whether for work or self-improvement-related reasons. But charting, like other psychological interventions, was and should remain strictly voluntary. Individuals who were uninterested in charting could (and did) study other topics.
I hope this, along with my LessWrong forum post, helps to answer some of the questions and concerns raised here. If you have any questions not answered by this comment, my post, or other materials online, please feel free to email me (larissa@leverageresearch.org) or join us at our virtual office next weekend.
I would suggest that anything in this vein should be reported to Julia Wise, as I believe she is a designated person for reporting concerns about community health, harmful behaviours, abuse, etc. She is unaffiliated with Leverage, and is a trained social worker.
(deleted)
This was indeed a big screwup on my part. Again, I’m really sorry I broke your trust.
To add detail about my mistake:
When you asked if you could confidentially send me a draft of your post about Will’s book to check, I said yes.
The next week you sent me a couple more emails with different versions of the draft. When I saw that the draft was 18 pages of technical material, I realized I wasn’t going to be a good person to review it. That’s when I forwarded to someone on Will’s team asking if they could look at it instead of me.
I should never have done that, because your original email asked me not to share it with anyone. For what it’s worth, the way that this happened is that when I was deciding what to do with the last email in the chain, I didn’t remember and didn’t check that the first email in the chain requested confidentiality. This was careless of me, and I’m very sorry about it.
I think the underlying mistake I made was not having this kind of situation flagged as sensitive in my mind, which contributed to my forgetting the original confidentiality request. If the initial email had been about some more personal situation, I am much more sure it would have been flagged in my mind as confidential. But because this was a critique of a book, I had it flagged as something like “document review” in my mind. This doesn’t excuse my mistake—and any breach of trust is a serious problem given my role—but I hope it helps show that it wasn’t intentional.
I now try to be much more careful about situations where I might make a similar mistake.
I’ve now added info on this to the post about being a contact person and to CEA’s mistakes page.
Personally, I don’t really blame you or think less of you for this screwup. I never got the impression that you are the sort of person who should be sent confidential book review drafts. Maybe you’d disagree, but that seems like a misunderstanding of your role to me.
It seemed clear to me that you made yourself available to confidential reports regarding conflict, abuse, and community health. Not disagreements with a published book. It makes sense that you didn’t have a habit of mentally flagging those emails as confidential.
Regardless, I trust that you’ve been more careful since then, and I appreciate how clearly you own up to this mistake.
I want to offer my +1 that I strongly believe Julia’s trustworthy for reports regarding Leverage.
I would generally expect that anyone I give a draft to, if they want to forward it to someone else, would put the author of the draft in the CC. Even in the absence of a promise of confidentiality, I consider sharing someone’s draft without their permission, and withholding the information that you shared it, to be bad behavior.
Of course it doesn’t show that it wasn’t intentional to say “my mistake wasn’t intentional but accidental”. The only thing that shows that it wasn’t intentional would be to take actual consequences that are meaningful enough so that it doesn’t look like you benefited CEA with your mistake.
Saying “I’m sorry I broke your trust” without engaging in any consequences for it feels cheap. To me such a mistake feels like you owe something to guzey.
One thing you could have done, if you actually cared, would have been to advocate for guzey in this exchange, even if that went against your personal positions.
Admitting the mistake only in the comments and not in a more visible manner also doesn’t feel like you are treating it seriously enough. It likely deserves the same treatment as the mistakes on https://www.centreforeffectivealtruism.org/our-mistakes
For what it’s worth, I do think this is probably a serious enough mistake to go on this page.
Wow, that is very bad. Personally I’d still trust Julia as someone to report harms from Leverage to, mostly from generally knowing her and knowing her relationship to Leverage, but I can see why you wouldn’t.
Is Leverage willing to grant a blanket exemption from the NDAs which people evidently signed, to rectify the potential ongoing harms of not having information available? If not, can you share the text of the NDAs?
Hi Larissa -
Please consider that the people who most experienced harms from psychological practices at Leverage may not feel comfortable emailing that information to you. Given what they experienced, they might reasonably expect the organization to use any provided information primarily for its own reputational defense, and to discredit the harmed parties.
Thank you for the clarity here.
This is not my understanding. My impression is that a strong expectation was established by individual trainers with their trainees, and that charting was generally done during the hiring process, even if the stated policy was that it was not required or mandatory.
It seems that Leverage is currently planning to publish a number of their techniques, and from Leverage’s point of view there are considerations that releasing the techniques could be dangerous for the people using them. To me that does suggest a sincere desire to use provided information in a useful way.
See from https://www.lesswrong.com/posts/3GgoJ2nCj8PiD4FSi/updates-from-leverage-research-history-and-recent-progress :
If there are particular people who feel that they have been damaged, it would be great to still have a way for the information to reach Leverage. Maybe a third party could be found to mediate the conversation?
Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?
Why is this getting downvotes? It’s a constructive comment containing a good idea (mediation to address concerns) and pointing at a source of transparency, which everyone here has been asking for.
I’m not a rationalist, and I’m new to actually saying anything on LW (despite lurking for 4ish years now—and yes, I made this alt today), but it seems like this would be the type of community to be more open-minded about a topic than what I’m seeing. By “what I’m seeing” I mean people are just throwing rocks and being unwilling to find any way to work with someone who’s trying to address the concerns of the OP and commenters.
I didn’t downvote ChristianKI’s comment, but I feel like it’s potentially a bit naive.
>Is there anything else you could think of that would be a credible signal that Leverage is sincere about seeking the information about harms?
In my view, the question isn’t so much whether they genuinely don’t want harms to happen (especially because harming people psychologically often isn’t even good for growing the organization, not to mention the reputational risks). I feel like the sort of thing ChristianKl pointed out is just a smart PR move given what people already think about Leverage, and, conditional on the assumption that Leverage is concerned about their reputation in EA, it says nothing about genuine intentions.
Instead, what I’d be curious to know is whether they have the integrity to be proactively transparent about past mistakes, radically changed course when it comes to potentially harmful practices, and refrain from using any potentially harmful practices in cases where it might be advantageous on a Machiavellian-consequentialist assessment. To ascertain those things, one needs to go beyond looking at stated intentions. “Person/organization says nice-sounding thing, so they seem genuinely concerned about nice aims, therefore stop being so negative” is a really low bar and probably leads to massive over-updating in people who are prone to being too charitable.
I didn’t argue that it says something about good intentions. My main argument is that it’s useful to cooperate with Leverage on releasing their techniques with the safety warnings that are warrented given past problems instead of not doing that which increases the chances that people will use the techniques in a way that messes them up.
I do consider belief reporting to be a very valuable invention, and I think it’s plausible that this is true for more of what Leverage produced. I do see that a technique like belief reporting allows for doing scientific experiments that weren’t possible before.
Information gathered from the experiments already run can quite plausibly help other people avoid encountering harm when integrated into the starter kit that they develop.
I think skepticism about nice words without difficult-to-fake evidence is warranted, but I also think some of this evidence is already available.
For example, I think it’s relatively easy to verify that Leverage is a radically different organization today. The costly investments we’ve made in history of science research provide the clearest example, as does the fact that we’re no longer pursuing any new psychological research.
I think the fact that it is now a four-person remote organization doing mostly research on science, as opposed to an often-live-in organization with dozens of employees doing intimate psychological experiments and following various research paths, tells me that you are essentially a different organization; the only commonalities are the name and the fact that Geoff is still the leader.
If you hover over the karma counter, you can see that the comment is sitting at −2 with 12 votes, which means that there is a significant disagreement on how to judge it, not agreement that it should go away.
(It makes some sense to oppose somewhat useful things that aren’t as useful as they should be, or as safe as they should be; I think that is the reason for this reaction. And then there is the harmful urge to punish people who don’t punish others, or who might even dare suggest talking to them.)
What are your personal connections, if any, to Leverage Research (either “1.0” or “2.0″)?
I’d rather not say, for the sake of my anonymity—something which is important to me because this:
is a real concern. I’ve seen it firsthand—people associated with Leverage being ostracized, bullied, and made to feel very unwelcome and uncomfortable at social events and in online spaces by people in nearby communities, including this one.
It seems like a real risk to me that any amount of personal information I give will be used to discover my identity, and I’ll be subject to the same.
Which, by the way, is despicable, and I find it alarming that only one person (besides Kerry) in this thread has acknowledged this behavior pattern.
I said in another comment that I didn’t make an alt to come here and “defend Leverage”—this instance is the exception to that. These people are human beings.
(quote from Kerry’s comment: https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-facts-about-leverage-research-1-0-1?commentId=hqDXAtk6cnqDStkGC)
(I’m aware that this comment is intense; @gjm that intensity is not intended to be directed at you, but at the situation as a whole.)
If people are being bullied, that’s extremely bad, and if you see that and call it out you’re doing a noble thing.
But all I’ve seen in this thread—I can’t comment on e.g. what happens in person in the Bay Area, since that’s thousands of miles away from where I am—is people saying negative things about Leverage Research itself and not about individuals associated with it, with the single exception of the person in charge of Leverage, who fairly credibly deserves some criticism if the negative things being said about the organization are correct.
Bullying people is cruel and harmful. I’m not so sure there’s anything wrong with “bullying” an organization. Especially if that organization is doing harm, or if there is good reason to think it is likely to do harm in the future.
I’ve seen someone from a different org, but with a similar valence in the community, get treated quite poorly at a party when they let their association be known. It was like the questioner stopped seeing them as a person with feelings and only treated them as an extension of the organization. I felt gross watching it and regret not saying anything at the time.
It seems overwhelmingly likely to me that Leveragers faced the same thing, and also that some members lumped some legitimate criticisms or refusals to play along in with this unacceptable treatment, because that’s a human thing to do.
ETA: I talked to the person in question and they don’t remember this, so apparently it made a bigger emotional impression on me than them (they remembered a different convo at the same event that seemed like the same kind of thing, but didn’t report it being particularly unpleasant). I maintain that if I were regularly subject to what I saw it would have been quite painful, and imagine that to be true for at least some other people.
There’s a pragmatic question of building reliable theory of what’s going on, which requires access to the facts. Even trivial inconvenience for those who have the facts in communicating them does serious damage to this process’s ability to understand what’s going on.
The most valuable facts are those that contradict the established narrative of the theory, they can actually be relevant for improving it, for there is no improvement without change. Seeing a narrative that contradicts the facts someone has is already disheartening, so everything else that could possibly be done to make sharing easier, and not make it harder, should absolutely be done.
Yes, but imagine for a second that you worked at Leverage and you’re reading this thread (and I’d be surprised if several people from both 1.0 and 2.0 were not). Do you think that, whether they had a negative experience or a positive experience, they would feel comfortable commenting on that here?
(This is the relevant impact of the things mentioned in my previous comment.)
No. Of course not. Because the overpowering narrative in this thread, regardless of the goals or intentions of the OP, is “Leverage was/is a cult”.
No one accused of being in a cult is going to come into the community of their accusers and say a word. Of course, with the exception of two people in 2.0 who have posted here, one of whom is a representative who has been accused of plotting to coerce and manipulate victims, and the other of whom has been falsely accused of trying to hide their identity in the thread.
And this is despite Leverage’s efforts to become more legible and transparent.
If someone who worked there had negative experiences as a result, then, of course, they may not want to post publicly in an environment where the initiative that they once put their time, energy, and effort into is being so highly criticized, and in some cases, again, blatantly accused of being a literal cult or what I would call a “strawman’s term” for a cult. They also may not want to air their concerns with their ex-employers in this public setting.
And on the other hand, if someone who worked there had positive experiences, they are left to watch as, once again, the discourse of this group disallows them from giving input without figuratively burning them at the stake for supporting something that they personally experienced and had no issue with.
And these are just the first few things that came to mind for me when considering why they may not be present in this conversation.
My main concern here is that this space doesn’t allow them to speak AT ALL without serious repercussions, and that is caused by the pattern I mentioned in my comment above. Because of this, the discourse around Leverage Research on this thread (while there has still been new information exchanged, and I do not want to discount that) is doomed to be an echo chamber between people who are degrees (plural) away from whatever the truth may be.
This is my takeaway from this entire thread, and it’s a shame.
(Sorry for using the words “of course” “accused”/”accusers” etc so frequently—I am tired.)
I don’t know how comfortable any given person would feel commenting here. I do know that Kerry Vaughan, who is with Leverage now, has evidently felt comfortable enough to comment. I have no idea who you are but it seems fairly apparent that you have some association with Leverage, and you evidently feel comfortable enough to comment.
You say that one of those people (presumably meaning Kerry) “has been accused of plotting to coerce and manipulate victims”. I can’t find anywhere where anyone has made any such accusation. I can’t find any instance of “coerce” or any of its forms other than in your comment above. I find two other instances of “manipulate” and related words; one is specifically about Geoff Anders (who so far as I know is not the same person as Kerry Vaughan) and the other is talking generally about psychological manipulation and doesn’t make any accusations about specific people.
You say that the other person (presumably meaning you) “has been falsely accused of trying to hide their identity”, but so far as I can make out you are openly trying to hide your identity (on the grounds that if people could tell who you are then you would be mistreated on account of being associated with Leverage).
(I have to say that I’m a bit confused by the anonymity thing. Are you concerned that if you were onymous then people “in real life” would read what you say here, realise that you’re associated with Leverage, and mistreat you? Or that if you were onymous then people here would recognize your name, realise that you’re associated with Leverage, and mistreat you? Or something else? The first would make sense only if “in real life” you were concealing whatever associations you have with Leverage, which I have to say would itself be a bit concerning; the second would make sense only if knowing your name would make people in this thread think you more closely associated with Leverage than they already think you, and unless you’re Literal Geoff Anders or something that seems a little unlikely. And I’m not sure what “something else” might be.)
Saying that someone is in a cult (though I note that most people have been pretty careful not to use quite that terminology) isn’t an accusation. Not at the person in question, anyway. For sure it’s the sort of thing that many people will find uncomfortable. But what’s uncomfortable here is the content of the claim itself, no? So what less-bullying thing would you prefer someone to do, if they are concerned that an organization other people around them might join is worryingly cult-like? Should they just not say anything, because saying “X is cult-like” is bullying? That policy means never being able to give warning to people who might be getting ensnared by an actual cult. What’s the alternative?
No comment on your larger point but
“You are in a cult” is absolutely an accusation directed at the person. I can understand moral reasons why someone might wish for a world in which people assigned blame differently, and technical reasons why this feature of the discourse makes purely descriptive discussions unhelpfully fraught, but none of that changes the empirical fact that “You are in a cult” functions as an accusation in practice, especially when delivered in a public forum. I expect you’ll agree if you recall specific conversations-besides-this-one where you’ve heard someone claim that another participant is in a cult.
Maybe you’re right. So, same question as for ooverthinkk: suppose you think some organization that people you know belong to is a cult, or has some of the same bad features as cults. What should you do?
(It seems to me that ooverthinkk feels that at least some of what is being said in this thread about Leverage is morally wrong, and I hope there’s some underlying principle that’s less overreaching than “never say that anything is cult-like” and less special-pleading than “never say bad things about Leverage”—but I don’t yet understand what that underlying principle is.)
(edit: moved to the correct reply area)
The first person was Larissa, the second person was Kerry.
The “anonymity thing” does not fall under the first category. I’d just prefer, as I stated before, not to be targeted “in real life” for my views on this thread.
The “bullying” that I’m referring to happened/happens outside of this thread, and is in no way limited to instances of people being accused of being “in a cult”.
D’oh! I’d forgotten that Larissa had commented here too. My apologies.
As I’ve said, I have no knowledge of any bullying that may or may not be occurring elsewhere (especially in person in the Bay Area), and if anyone’s getting bullied then that’s bad. If that isn’t common knowledge, then there’s a problem. But the things in this thread that you’ve taken exception to don’t seem to me to come close to bullying. (Obviously, though, they could be part of a general pattern of excessive hostility to all things Leverage.)
Do you think OP was wrong to post what they did? If so, is that because you think the things they’ve said about Leverage are factually wrong, or because you think people who think they see an organization behaving in potentially harmful ways shouldn’t say so, or what?
Bullshit. This is not how you prevent abuse of power. This is how you cover it up.
Have you even read the default comment guidelines? Hint: they’re right below where you’re typing.
For your reference:
Default comment guidelines:
Aim to explain, not persuade
Try to offer concrete models and predictions
If you disagree, try getting curious about what your partner is thinking
Don’t be afraid to say ‘oops’ and change your mind
Let’s use some common sense here, please. If—hypothetically speaking—some organization abuses people, what is the most likely consequence if the victim e-mails in confidence their PR person?
My model says, the PR person will start working on a story that protects the organization, with the advantage that the PR person can publish their version before the victim does. (There are also other options, such as threatening the victim, which wouldn’t be available if the victim told their story to someone else first.)
The content of comment guidelines is not a reason to follow them.
Social science infohazards are not a thing, because they must be implemented by an organization to work, and organizations leak like a sieve. Even nuclear secrets leak. This demand for secrecy is a blatant excuse used to obstruct oversight and to prevent peer review. What you’re doing is the opposite of science.
Interestingly, “peer review” occurs pretty late in the development of scientific culture. It’s not something we see in our case studies on early electricity, for example, which currently cover the period between 1600 and 1820.
What we do see throughout the history is the norm of researchers sharing their findings with others interested in the same topics. It’s an open question whether Leverage 1.0 violated this norm. On the one hand, they had a quite vibrant and open culture around their findings internally and did seek out others who might have something to offer to their project. On the other hand, they certainly didn’t make any of this easily accessible to outsiders. I’m inclined to think they violated some scientific norms in this regard, but I think the work they were doing is pretty clearly science albeit early stage science.
I want to draw attention to the fact that “Kerry Vaughan” is a brand new account that has made exactly three comments, all of them on this thread. “Kerry Vaughan” is associated with Leverage. “Kerry Vaughan”’s use of “they” to describe Leverage is deliberately misleading.
If “it’s not unscientific because it merely takes science back 200-400 years” is the best defense that LEVERAGE ITSELF can give for its own epistemic standards then any claims it has to scientific rigor are laughable. 1600 was the time of William Shakespeare.
Edit: I’m not saying that science in 1600 was laughable. I’m saying that performing 1600-style science today is laughable.
I’m not hiding my connection to Leverage which is why I used my real name, mentioned that I work at Leverage in other comments, and used “we” in connection with a link to Leverage’s case studies. I used “they” to refer to Leverage 1.0 since I didn’t work at Leverage during that time.
To be fair, KV was open about that association in both previous comments, using ‘we’ in the first and including this disclaimer in the second --
(I currently work at Leverage research but did not work at Leverage during Leverage 1.0 (although I interacted with Leverage 1.0 and know many of the people involved). Before working at Leverage I did EA community building at CEA between Summer 2014 and early 2019.)
-- which also seems to explain the use of ‘they’ in KV’s third comment, which referred specifically to “Leverage 1.0”.
(I hope this goes without saying on LW, but I don’t mean this as a general defense of Leverage or of KV’s opinions. I know nothing about either beyond what I’ve read here, and I haven’t even read all the relevant comments. Personally I wouldn’t get involved with an organisation like Leverage.)
The problem is that current academic standards lead to fields like psychology being very unproductive.
Experimenting with going back to scientific norms from before the great stagnation is one way to work to achieve scientific progress.
(This account is the same Kerry btw; my guess is Kerry happened to try logging in with Google, which doesn’t actually connect to existing accounts)
I don’t think that’s my account actually. It’s entirely possible that I never created a LW account before now.