Here is a thread for detail disagreements, including nitpicks as well as larger things, that aren’t necessarily meant to connect up with any particular claim about what overall narratives are accurate. (Or maybe the whole comment section is that, because this is LessWrong? Not sure.)
I’m starting this because local validity semantics are important, and because it’s easier to get details right if I (and probably others) can consider those details without having to pre-compute whether those details will support correct or incorrect larger claims.
For me personally, part of the issue is that though I disagree with a couple of the OP’s details, I also have some other details that support the larger narrative which are not included in the OP, probably because I have many experiences in the MIRI/CFAR/adjacent communities space that Jessicata doesn’t know and couldn’t include. And I keep expecting that if I post details without these kinds of conceptualizing statements, people will use this to make false inferences about my guesses about the higher-order bits of what happened.
The post explicitly calls for thinking about how this situation is similar to what is happening/happened at Leverage, and I think that’s a good thing to do. I do think I have specific evidence that makes me think that what happened at Leverage was pretty different from my experiences with CFAR/MIRI.
Like, I’ve talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I’ve seen, and I feel like the post is trying to draw some parallel here that fails to land for me (though it’s also plausible it is pointing out a higher level of information control than I thought was present at MIRI/CFAR).
I have also had my disagreements with MIRI being more secretive, and think it comes with a high cost that I think has been underestimated by at least some of the leadership, but I haven’t heard of people being “quarantined from their friends” because they attracted some “set of demons/bad objects that might infect others when they come into contact with them”, which feels to me like a different level of social isolation, and is part of the thing that happened in Leverage near the end. Whereas I’ve never heard of anything even remotely like this happening at MIRI or CFAR.
To be clear, I think this kind of purity dynamic is also present in other contexts, like high-class/low-class dynamics, and various other problematic common social dynamics, but I haven’t seen anything that seems to result in as much social isolation and alienation, in a way that seemed straightforwardly very harmful to me, and more harmful than anything comparable I’ve seen in the rest of the community (though not more harmful than what I have heard from some people about e.g. working at Apple or the U.S. military, which seem to have very similarly strict procedures and also a number of quite bad associated pathologies).
The other biggest thing that feels important to distinguish between what happened at Leverage and the rest of the community is the actual institutional and conscious optimization that has gone into PR control.
Like, I think Ben Hoffman’s point about “Blatant lies are the best kind!” is pretty valid, and I do think that other parts of the community (including organizations like CEA and to some degree CFAR) have engaged in PR control in various harmful but less legible ways, but I do think there is something additionally mindkilly and gaslighty about straightforwardly lying, or directly threatening adversarial action to prevent people from speaking ill of someone, in the way Leverage has. I always felt that the rest of the rationality community had a very large and substantial dedication to being very clear about when they denotatively vs. connotatively disagree with something, and a very deep and almost religious respect for the literal truth (see e.g. a lot of Eliezer’s stuff around the wizard’s code and meta-honesty), and I think the lack of that has made a lot of the dynamics around Leverage quite a bit worse.
I also think it makes understanding the extent of the harm and ways to improve it a lot more difficult. I think the number of people who have been hurt by various things Leverage has done is really vastly larger than the number of people who have spoken out so far, in a ratio that I think is very different from what I believe is true about the rest of the community. As a concrete example, I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting), and I feel pretty confident that I would feel very different if I had similarly bad experiences with CFAR or MIRI, based on my interactions with both of these organizations.
I think this kind of information control feels like what ultimately flips things into the negative for me, in this situation with Leverage. Like, I think I am overall pretty in favor of people gathering together and working on a really intense project, investing really hard into some hypothesis that they have some special sauce that allows them to do something really hard and important that nobody else can do. I am also quite in favor of people doing a lot of introspection and weird psychology experiments on themselves, and trying their best to handle the vulnerability that comes with doing that near other people, even though there is a chance things will go badly and people will get hurt.
But the thing that feels really crucial in all of this is that people can stay well-informed and can get the space they need to disengage, can get an external perspective when necessary, and somehow stay grounded all throughout this process. Which feels much harder to do in an environment where people are directly lying to you, or where people are making quite explicit plots to discredit you, or harm you in some other way, if you do leave the group, or leak information.
I do notice that in the above I make various accusations of lying or deception by Leverage without really backing them up with specific evidence, which I apologize for, and I think people reading this should overall not take comments like mine at face value before having heard something pretty specific that backs up the accusations in them. I have various concrete examples I could give, but do notice that doing so would violate various implicit and explicit confidentiality agreements I made, that I wish I had not made, and I am still figuring out whether I can somehow extract and share the relevant details, without violating those agreements in any substantial way, or whether it might be better for me to break the implicit ones of those agreements (which seem less costly to break, given that I felt like I didn’t really fully consent to them), given the ongoing pretty high cost.
When it comes to agreements preventing disclosure of information, often there’s no agreement to keep the existence of the agreement itself secret. If you don’t think you can ethically (and given other risks) share the content that’s protected by certain agreements, it would be worthwhile to share more about the agreements and with whom you have them. This might also be accompanied by a request to those parties to agree to lift the agreement. It’s worthwhile to know who thinks they need to be protected by secrecy agreements.
It has taken me about three days to mentally update more fully on this point. It seems worth highlighting now, using quotes from Oli’s post:
I’ve talked to a lot of people about stuff that happened at Leverage in the last few days, and I do think that overall, the level of secrecy and paranoia about information leaks at Leverage seemed drastically higher than anywhere else in the community that I’ve seen
I think the number of people who have been hurt by various things Leverage has done is really vastly larger than the number of people who have spoken out so far, in a ratio that I think is very different from what I believe is true about the rest of the community.
I am beginning to suspect that, even in the total privacy of their own minds, there are people who went through something at Leverage who can’t have certain thoughts, out of fear.
I believe it is not my place (or anyone’s?) to force open a locked door, especially locked mental doors.
Zoe’s post may have initially given me the wrong impression—that other ex-Leverage people would also be able to articulate their experiences clearly and express their fears in a reasonable and open way. I guess I’m updating away from that initial impression.
//
I suspect ‘combining forces’ with existing heavy-handed legal systems can sometimes be used in such a dominant manner that it damages people’s epistemics and health. And this is why a lot of ‘small-time’ orgs and communities try to avoid attention of heavy-handed bureaucracies like the IRS, psych wards, police depts, etc., which are often only called upon in serious emergencies.
I have a wonder about whether a small-time org’s willingness to use (way above its weight class) heavy-handed legal structures (like, beyond due diligence, such as actual threats of litigation) is evidence of that org acting in bad faith or doing something bad to its members.
I’ve signed an NDA at MAPLE to protect donor information, but it’s pretty basic stuff, and I have zero actual fear of litigation from MAPLE, and the NDA itself is not covering things I expect I’ll want to do (such as leak info about funders). I’ve signed NDAs in the past for keeping certain intellectual property safe from theft (e.g. someone is inventing a new game and doesn’t want others to get their idea). These seem like reasonable uses of NDAs.
When I went to my first charting session at Leverage, they … also asked me to sign some kind of NDA? As a client. It was a little weird? I think they wanted to protect intellectual property of their … I kind of don’t really remember honestly. Maybe if I’d tried to publish a paper on Connection Theory or Charting or Belief Reporting, they would have asked me to take it down. ¯\_(ツ)_/¯
maybe an unnecessary or heavy-handed integration between an org and legal power structures is a wtf kind of sign and seems good to try to avoid?
I really don’t know about the experience of a lot of the other ex-Leveragers, but the time it took her to post it, the number and kind of allies she felt she needed before posting it, and the hedging qualifications within the post itself detailing her fears of retribution, plus just how many people’s initial responses to the post were to applaud her courage, might give you a sense that Zoe’s post was unusually, extremely difficult to make public, and that others might not have that same willingness yet (she even mentions it at the bottom, and presumably she knows more about how other ex-Leveragers feel than we do).
I, um, don’t have anything coherent to say yet. Just a heads up. I also don’t really know where this comment should go.
But also I don’t really expect to end up with anything coherent to say, and it is quite often the case that when I have something to say, people find it worthwhile to hear my incoherence anyway, because it contains things that underlay their own confused thoughts, and after hearing it they are able to un-confuse some of those thoughts and start making sense themselves. Or something. And I do have something incoherent to say. So here we go.
I think there’s something wrong with the OP. I don’t know what it is, yet. I’m hoping someone else might be able to work it out, or to see whatever it is that’s causing me to say “something wrong” and then correctly identify it as whatever it actually is (possibly not “wrong” at all).
On the one hand, I feel familiarity in parts of your comment, Anna, about “matches my own experiences/observations/hearsay at and near MIRI and CFAR”. Yet when you say “sensible”, I feel, “no, the opposite of that”.
Even though I can pick out several specific places where Jessicata talked about concrete events (e.g. “I believed that I was intrinsically evil” and “[Michael Vassar] was commenting on social epistemology”), I nevertheless have this impression that I most naturally conceptualize as “this post contained no actual things”. While reading it, I felt like I was gazing into a lake that is suspended upside down in the sky, and trying to figure out whether the reflections I’m watching in its surface are treetops or low-hanging clouds. I felt like I was being invited into a mirror-maze that the author had been trapped in for… an unknown but very long amount of time.
There’s something about nearly every phrase (and sentence, and paragraph, and section) here that I just, I just want to spit out, as though the phrase itself thinks it’s made of potato chunks but in fact, out of the corner of my eye, I can tell it is actually made out of a combination of upside-down cloud reflections and glass shards.
Let’s try looking at a particular, not-very-carefully-chosen sentence.
As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.
I have so many questions. “As a consequence” seems fine; maybe that really is potato chunks. But then, “the people most mentally concerned” happens, and I’m like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have “with strange social metaphysics”, and I want to know “what is social metaphysics?”, “what is it for social metaphysics to be strange or not strange?” and “what is it to be mentally concerned with strange social metaphysics”? Next is “were marginalized”. How were they marginalized? What caused the author to believe that they were marginalized? What is it for someone to be marginalized? And I’m going to stop there because it’s a long sentence and my reaction just goes on this way the whole time.
I recognize that it’s possible to ask this many questions of this kind about absolutely any sentence anyone has ever uttered. Nevertheless, I have a pretty strong feeling that this sentence calls for such questions, somehow, much more loudly than most sentences do. And the questions the sentences call for are rarely answered in the post. It’s like a tidal wave of… of whatever it is. More and more of these phrases-calling-for-questions pile up one after another, and there’s no time in between to figure out what’s going on, if you want to follow the post whatsoever.
There are definitely good things in here. A big part of my impression of the author, based on this post, is that they’re smart and insightful, and trying to make the world better. I just, also have this feeling like something… isn’t just wrong here, but is going wrong, and maybe the going has momentum, and I wonder how many readers will get temporarily trapped in the upside down mirror maze while thinking they’re eating potatoes, unless they slow way way down and help me figure out what on earth is happening in this post.
This matches my impression in a certain sense. Specifically, the density of gears in the post (elements that would reliably hold arguments together, confer local validity, or pin them to reality) is low. It’s a work of philosophy, not investigative journalism. So there is a lot of slack in shifting the narrative in any direction, which is dangerous for forming beliefs (as opposed to setting up new hypotheses), especially if done in a voice that is not your own. The narrative of the post is coherent and compelling, it’s a good jumping-off point for developing it into beliefs and contingency plans, but the post itself can’t be directly coerced into those things, and this epistemic status is not clearly associated with it.
How do you think Zoe’s post, or mainstream journalism about the rationalist community (e.g. Cade Metz’s article, perhaps there are other better ones I don’t know about) compare on this metric? Are there any examples of particularly good writeups about the community and its history you know about?
I’m not saying that the post isn’t good (I did say it’s coherent and compelling), and I’m not at this moment aware of something better on its topic (though my ability to remain aware of such things is low, so that doesn’t mean much). I’m saying specifically that gear density is low, so it’s less suitable for belief formation than hypothesis setup. This is relevant as a more technical formulation of what I’m guessing LoganStrohl is gesturing at.
I think investigative journalism is often terrible, as is philosophy, but the concepts are meaningful in characterizing types of content with respect to gear density, including high quality content.
I am intending this more as contribution of relevant information and initial models than firm conclusions; conclusions are easier to reach the more different relevant information and models are shared by different people, so I suppose I don’t have a strong disagreement here.
Sure, and this is clear to me as a practitioner of the yoga of taking in everything only as a hypothesis/narrative, mining it for gears, and separately checking what beliefs happen to crystallize out of this, if any. But for someone who doesn’t always make this distinction, not having a clear indication of the status of the source material needlessly increases epistemic hygiene risks, so it’s a good norm to make epistemic status of content more legible. My guess is that LoganStrohl’s impression is partly of violation of this norm (which I’m not even sure clearly happened), shared by a surprising number of upvoters.
Do you predict Logan’s comment would have been much different if I had written “[epistemic status: contents of memory banks, arranged in a parseable semicoherent narrative sequence, which contains initial models that seem to compress the experiences in a Solomonoff sense better than alternative explanations, but which aren’t intended to be final conclusions, given that only a small subset of the data has been revealed and better models are likely to be discovered in the future]”? I think this is to some degree implied by the title which starts with “My experience...” so I don’t think this would have made a large difference, although I can’t be sure about Logan’s counterfactual comment.
I’m not sure, but the hypothesis I’m chasing in this thread, intended as a plausible steelman of Logan’s comment, thinks so. One alternative that is also plausible to me is motivated cognition that would decry undesirable source material for low gear density, and that one predicts little change in response to more legible epistemic status.
If you are genuinely asking, I think cutting that down into something slightly less clinical sounding (because it sounds sarcastic when formalized) would probably take a little steam out of that type of opposition, yes.
This reads like you feel compelled to avoid parsing the content of the OP, and instead intend to treat the criticisms it makes as a Lovecraftian horror the mind mustn’t engage with. Attempts to interpret this sort of illegible intent-to-reject as though it were well-intentioned criticism end up looking like:
I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me.
Very helpful to have a crisp example of this in text.
ETA: I blanked out the first few times I read Jessica’s post on anti-normativity, but interpreted that accurately as my own intent to reject the information rather than projecting my rejection onto the post itself, treated that as a serious problem I wanted to address, and was able to parse it after several more attempts.
I understood the first sentence of your comment to be something like “one of my hypotheses about Logan’s reaction is that Logan has some internal mental pressure to not-parse or not-understand the content of what Jessica is trying to convey.”
That makes sense to me as a hypothesis, if I’ve understood you, though I’d be curious for some guesses as to why someone might have such an internal mental pressure, and what it would be trying to accomplish or protect.
I didn’t follow the rest of the comment, mostly due to various words like “this” and “it” having ambiguous referents. Would you be willing to try everything after “attempts” again, using 3x as many words?
Logan reports a refusal to parse the content of the OP. Logan locates a problem nonspecifically in the OP, not in Logan’s specific reaction to it. This implies a belief that it would be bad to receive information from Jessica.
Logan reports a refusal to parse the content of the OP
But then, “the people most mentally concerned” happens, and I’m like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have “with strange social metaphysics”, and I want to know “what is social metaphysics?”, “what is it for social metaphysics to be strange or not strange?” and “what is it to be mentally concerned with strange social metaphysics”? Next is “were marginalized”. How were they marginalized? What caused the author to believe that they were marginalized? What is it for someone to be marginalized?
Most of this isn’t even slightly ambiguous, and Jessica explains most of the things being asked about, with examples, in the body of the post.
Logan locates a nonspecific problem in the OP, not in Logan’s response to it.
I just, also have this feeling like something… isn’t just wrong here, but is going wrong, and maybe the going has momentum, and I wonder how many readers will get temporarily trapped in the upside down mirror maze while thinking they’re eating potatoes, unless they slow way way down and help me figure out what on earth is happening in this post.
This isn’t a description of a specific criticism or disagreement. This is a claim that the post is nonspecifically going to cause readers to become disoriented and trapped.
This implies a belief that it would be bad to receive information from Jessica.
If the objection isn’t that Jessica is mistaken but that she’s “going wrong,” that implies that the contents of Jessica’s mind are dangerous to interact with. This is the basic trope of Lovecraftian horror—that there are some real things the human mind can’t handle and therefore wants to avoid knowing. If something is dangerous, like nuclear waste or lions, we might want to contain it or otherwise keep it at a distance.
Since there’s no mechanism suggested, this looks like an essentializing claim. If the problem isn’t something specific that Jessica is doing or some specific transgression she’s committing, then maybe that means Jessica’s just intrinsically dangerous. Even if not, if Jessica were going to take this concern seriously, without a theory of how what she’s doing is harmful, she would have to treat all of her intentions as dangerous and self-contain.
In other words, she’d have to proceed as though she might be intrinsically evil (“isn’t just wrong here, but is going wrong, and maybe the going has momentum”), is in a hell of her own creation (“I felt like I was being invited into a mirror-maze that the author had been trapped in for… an unknown but very long amount of time.”), and ought to avoid taking actions, i.e. become catatonic.
I also don’t know what “social metaphysics” means.
I get the mood of the story. If you look at specific accusations, here is what I found (maybe I overlooked something):
there were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR; at least one other than me had worked at MIRI for a significant time, and at least one had done work with MIRI on a shorter-term basis. There was, in addition, a case of someone becoming very paranoid, attacking a mental health worker, and hijacking her car, leading to jail time; this person was not an employee of either organization, but had attended multiple CFAR events including a relatively exclusive AI-focused one.
There are even cases of suicide in the Berkeley rationality community [...] associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption
a prominent researcher was going around convincing people that human-level AGI was coming in 5-15 years.
MIRI became very secretive about research. Many researchers were working on secret projects, and I learned almost nothing about these. I and other researchers were told not to even ask each other about what others of us were working on, on the basis that if someone were working on a secret project, they may have to reveal this fact. Instead, we were supposed to discuss our projects with an executive, who could connect people working on similar projects.
Someone in the community told me that for me to think AGI probably won’t be developed soon, I must think I’m better at meta-rationality than Eliezer Yudkowsky, a massive claim of my own specialness
Years before, MIRI had a non-disclosure agreement that members were pressured to sign, as part of a legal dispute with Louie Helm.
Anna Salamon said that Michael was causing someone else at MIRI to “downvote Eliezer in his head” and that this was bad because it meant that the “community” would not agree about who the leaders were, and would therefore have akrasia issues due to the lack of agreement on a single leader in their head telling them what to do.
MIRI had a “world-saving plan”. [...] Nate Soares frequently talked about how it was necessary to have a “plan” to make the entire future ok, to avert AI risk; this plan would need to “backchain” from a state of no AI risk and may, for example, say that we must create a human emulation using nanotechnology that is designed by a “genie” AI, which does a narrow task rather than taking responsibility for the entire future; this would allow the entire world to be taken over by a small group including the emulated human.
Our task was to create an integrated, formal theory of values, decisions, epistemology, self-improvement, etc (“Friendliness theory”), which would help us develop Friendly AI faster than the rest of the world combined was developing AGI (which was, according to leaders, probably in less than 20 years). It was said that a large part of our advantage in doing this research so fast was that we were “actually trying” and others weren’t. It was stated by multiple people that we wouldn’t really have had a chance to save the world without Eliezer Yudkowsky.
I heard that “political” discussions at CFAR (e.g. determining how to resolve conflicts between people at the organization, which could result in people leaving the organization) were mixed with “debugging” conversations, in a way that would make it hard for people to focus primarily on the debugged person’s mental progress without imposing pre-determined conclusions. Unfortunately, when there are few people with high psychological aptitude around, it’s hard to avoid “debugging” conversations having political power dynamics, although it’s likely that the problem could have been mitigated.
I recall talking to a former CFAR employee who was scapegoated and ousted after failing to appeal to the winning internal coalition; he was obviously quite paranoid and distrustful, and another friend and I agreed that he showed PTSD symptoms.
This is like 5-10% of the text. A curious thing is that it is actually the remaining 90-95% of the text that evokes bad feelings in the reader, at least in my case.
To compare, when I was reading Zoe’s article, I was shocked by the described facts. When I was reading Jessica’s article, I was shocked by the horrible things that happened to her, but the facts felt… most of them boring… the most worrying part was about a group of people who decided that CFAR was evil, spent some time blogging against CFAR, then some of them killed themselves; which is very sad, but I fail to see how exactly CFAR is responsible for this, when it seems like the anti-CFAR group actually escalated the underlying problems to the point of suicide. (This reminds me of XiXiDu describing how fighting against MIRI causes him health problems; I feel bad about him having the problems, but I am not sure what MIRI could possibly do to stop this.)
Jessica’s narrative is that MIRI/CFAR is just like Leverage, except less transparent. Yet when she mentions specific details, it often goes somewhat like this: “Zoe mentioned that Leverage did X. CFAR does not do X, but I feel terrible anyway, so it is similar. Here is something vaguely analogical.” Like, how can you conclude that not doing something bad is even worse than doing it, because it is less transparent?! Of course it is less transparent if it, you know, actually does not exist.
Or maybe I’m tired and failing at reading comprehension. I wish someone would rewrite the article, to focus on the specific accusations against MIRI/CFAR, and remove all those analogies-except-not-really with Zoe; just make it a standalone list of specific accusations. Then let’s discuss that.
Thanks for this articulate and vulnerable writeup. I do think we might all agree that the experience you are describing seems like a very good description of what somebody in a cult would go through while facing information that would trigger disillusionment.
I am not asserting you are in a cult, maybe I should use more delicate language, but in context I would like to point out this (to me) obvious parallel.
I feel like one really major component that is missing from the story above, in particular around a number of the psychotic breaks, is any mention of Michael Vassar and a bunch of the people he tends to hang out with. I don’t have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael.
I think this is important because Michael has, I think, a very large psychological effect on people, and also has some bad tendencies to severely outgroup people who are not part of his very local social group, and also some history of very viciously attacking outsiders who behave in ways he doesn’t like, including making quite a lot of very concrete threats (things like “I hope you will be guillotined, and the social justice community will find you and track you down and destroy your life, after I do everything I can to send them onto you”). I personally have found those threats to very drastically increase the stress I experience from interfacing with Michael (and some others in his social group), and also my models of how these kinds of things happen have a lot to do with dynamics where this kind of punishment is expected if you deviate from the group norm.
I am not totally confident that Michael has played a big role in all of the bad psychotic experiences listed above, but my current best guess is that he has, and I do indeed pretty directly encourage people to not spend a lot of time with Michael (though I do think talking to him occasionally is actually great and I have learned a lot of useful things from talking to him, and also think he has helped me see various forms of corruption and bad behavior in my environment that I am genuinely grateful to have noticed, but I very strongly predict that I would have a very intensely bad experience if I were to spend more time around Michael, in a way I would not endorse in the long run).
I don’t have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them were in pretty close proximity to having spent a bunch of time (and in some of the cases after taking psychedelic drugs) with Michael.
Of the 4 hospitalizations and 1 case of jail time I know about, 3 of those hospitalized (including me) were talking significantly with Michael, and the others weren’t afaik (and neither were the 2 suicidal people), though obviously I couldn’t know about all conversations that were happening. Michael wasn’t talking much with Leverage people at the time.
I hadn’t heard of the statement about guillotines, that seems pretty intense.
I talked with someone recently who hadn’t been in the Berkeley scene specifically but who had heard that Michael was “mind-controlling” people into joining a cult, and decided to meet him in person, at which point he concluded that Michael was actually doing some of the unique interventions that could bring people out of cults, which often involves causing them to notice things they’re looking away from. It’s common for there to be intense psychological reactions to this (I’m not even thinking of the psychotic break as the main one, since that didn’t proximately involve Michael; there have been other conversations since then that have gotten pretty emotionally/psychologically intense), and it’s common for people to not want to have such reactions, although clearly at least some people think they’re worth having for the value of learning new things.
IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred. Though someone else might have better info here and should correct me if I am wrong. I don’t know of any 4th case, so I believe you that they didn’t have much to do with Michael. This makes the current record 4⁄5 to me, which sure seems pretty high.
Michael wasn’t talking much with Leverage people at the time.
I did not intend to indicate Michael had any effect on Leverage people, or to say that all or even a majority of the difficult psychological problems that people had in the community are downstream of Michael. I do think he had a large effect on some of the dynamics you are talking about in the OP, and I think any picture of what happened/is happening seems very incomplete without him and the associated social cluster.
I think the part about Michael helping people notice that they are in some kind of bad environment seems plausible to me, though doesn’t have most of my probability mass (~15%), and most of my probability mass (~60%) is indeed that Michael mostly just leverages the same mechanisms for building a pretty abusive and cult-like ingroup that are common, with some flavor of “but don’t you see that everyone else is completely crazy and evil” thrown into it.
I think it is indeed pretty common for abusive environments to start with “here is why your current environment is abusive in this subtle way, and that’s also why it’s OK for me to do these abusive-seeming things, because it’s not worse than anywhere else”. I think this was a really large fraction of what happened with Brent, and I also think a pretty large fraction of what happened with Leverage. I also think it’s a large fraction of what’s going on with Michael.
I do want to reiterate that I do assign substantial probability mass (~15%) to your proposed hypothesis being right, and am interested in more evidence for it.
IIRC the one case of jail time also had a substantial interaction with Michael relatively shortly before the psychotic break occurred
I was pretty involved in that case after the arrest and for several months after and spoke to MV about it, and AFAICT that person and Michael Vassar only met maybe once casually. I think he did spend a lot of time with others in MV’s clique though.
Ah, yeah, my model is that the person had spent a lot of time with MV’s clique, though I wasn’t super confident they had talked to Michael in particular. Not sure whether I would still count this as being an effect of Michael’s actions, seems murkier than I made it out to be in my comment.
I think one of the ways of disambiguating here is to talk to people outside your social bubble, e.g. people who live in different places, people with different politics, people in different subcultures or on different websites (e.g. Twitter or Reddit), people you run into in different contexts, people who have had experience in different mainstream institutions (e.g. different academic departments, startups, mainstream corporations). Presumably, the more of a culty bubble you’re in, the more prediction error this will generate, and the harder it will be to establish communication protocols across the gap. This establishes a point of comparison between people in bubble A vs B.
I spent a long part of the 2020 quarantine period with Michael and some friends of his (and friends of theirs) who were previously in a non-bay-area cult, which exposed me to a lot of new perspectives I didn’t know about (not just theirs, but also those of some prison reform advocates and religious people), and made Michael seem less extremal or insular in comparison, since I wasn’t just comparing him to the bubble of people who I already knew about.
Hmm, I’ve tried to read this comment for something like 5 minutes, but I can’t really figure out its logical structure. Let me give it a try in a more written format:
I think one of the ways of disambiguating here
Presumably this is referring to distinguishing the hypothesis that Michael is kind of causing a bunch of cult-like problems, from the hypothesis that he is helping people see problems that are actually present.
here is to talk to people outside your social bubble, e.g. people who live in different places, people with different politics, people in different subcultures or on different websites (e.g. Twitter), people you run into in different contexts, people who have had experience in different mainstream institutions. Presumably, the more of a culty bubble you’re in, the more prediction error this will generate, and the harder it will be to establish communication protocols across the gap.
I don’t understand this part. Why would there be a monotonic relationship here? I agree with the bubble part, and while I expect there to be a vague correlation, it doesn’t feel like it measures anything like the core of what’s going on. I wouldn’t measure the cultishness of an economics department based on how good they are at talking to improv students. It might still be good for them to get better at talking to improv students, but failure to do so doesn’t feel like particularly strong evidence to me (compared to other dimensions, like the degree to which they feel alienated from the rest of the world, or have psychotic breaks, or feel under a lot of social pressure to not speak out, or many other things that seem similarly straightforward to measure but feel like they get more at the core of the thing).
But also, I don’t understand how I am supposed to disambiguate things here? Like, maybe the hypothesis here is that by doing this myself I could understand how insular my own environment is? I do think that seems like a reasonable point of evidence, though I also think my experiences have been very different from people at MIRI or CFAR. I also generally don’t have a hard time establishing communication protocols across these kinds of gaps, as far as I can tell.
who were previously in a non-bay-area cult, which exposed me to a lot of new perspectives I didn’t know about (not just theirs, but also those of some prison reform advocates and religious people), and made Michael seem less extremal or insular in comparison, since I wasn’t just comparing him to the bubble of people who I already knew about.
This is interesting, and definitely some evidence, and I appreciate you mentioning it.
If you think the anecdote I shared is evidence, it seems like you agree with my theory to some extent? Or maybe you have a different theory for how it’s relevant?
E.g. say you’re an econ student, and there’s this one person in the econ department who seems to have all these weird opinions about social behavior and think body language is unusually important. Then you go talk to some drama students and find that they have opinions that are even more extreme in the same direction. It seems like the update you should make is that you’re in a more insular social context than the person with opinions on social behavior, who originally seemed to you to be in a small bubble that wasn’t taking in a lot of relevant information.
(basically, a lot of what I’m asserting constitutes “being in a cult” is living in a simulation of an artificially small, closed world)
The update was more straightforward, based on “I looked at some things that are definitely cults, what Michael does seems less extremal and insular in comparison, therefore it seems less likely for Michael to run into the same problems”. I don’t think that update required agreeing with your theory to any substantial degree.
I do think your paragraph still clarified things a bit for me, though with my current understanding, presumably the group to compare yourself against is less cults, and more just like, average people who are somewhat further out on some interesting dimension. And if you notice that average people seem really crazy and cult-like to you, then I do think this is something to pay attention to (though like, average people are also really crazy on lots of topics, like schooling and death and economics and various COVID related things that I feel pretty confident in, and so I don’t think this is some kind of knockdown argument, though I do think having arrived at truths that large fractions of the population don’t believe definitely increases the risks from insularity).
I definitely don’t want to imply that agreement with the majority is a metric, rather the ability to have a discussion at all, to be able to see part of the world they’re seeing and take that information into account in your own view (which might be called “interpretive labor” or “active listening”).
Agree. I do think the two are often kind of entwined (like, I am not capable of holding arbitrarily many maps of the world in my mind at the same time, so when I arrive at some unconventional belief that has broad consequences, the new models based on that belief will often replace more conventional models of the domain, and I will have to spend time regenerating the more conventional models and beliefs in conversation with someone who doesn’t hold the unconventional belief, which does frequently make the conversation kind of harder, and which I still don’t think is evidence of something going terribly wrong).
Oh, something that might not have been clear is that talking with other people Michael knows made it clear that Michael was less insular than MIRI/CFAR people (who would have been less able to talk with such a diverse group of people, afaict), not just that he was less insular than people in cults.
Do you know if the 3 people who were talking significantly with Michael did LSD at the time or with him?
Erm… feel free to keep plausible deniability. Taking LSD seems to me like a pretty worthwhile thing to do in lots of contexts and I’m willing to put a substantial amount of resources into defending against legal attacks (or supporting you in the face of them) that are caused by you replying openly here. (I don’t know if that’s plausible; I’ve not thought about it much, so I’m mentioning it anyway.)
I had taken a psychedelic previously with Michael; one other person probably had; the other probably hadn’t; I’m quite unsure of the latter two judgments. I’m not going to disambiguate about specific drugs.
I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). I think in that case the thing he is attacking him for is leveraging people’s desire to be a morally good person in a way that they don’t endorse (and that plays into various guilt narratives), to get them to give him money, and to get them to dedicate their life towards Effective Altruism, and via that technique, preventing a substantial fraction of the world’s top talent from dedicating themselves to actually important problems, and also causing them various forms of psychological harm.
I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above).
Do you have an idea of when those things were directed at Holden?
UPDATE: I mostly retract this comment. It was clarified that the threat was made in a mostly public context which changes the frame for me significantly.
I think it is problematic to post a presumably very private communication (the threat) to such a broad audience. Even when it is correctly attributed, it lacks all the context of the situation it was uttered in. It lacks any amends that may or may not have been made and exposes many people to the dynamics of the narrative resulting from the posting here. I’m not saying you shouldn’t post it. I don’t know the context and what you know either. But I think you should take ownership of the consequences of citing it and of any way it might escalate from here (a norm proposed by Scott Adams a while ago).
I don’t think the context in which I heard about this communication was very private. There was a period where Michael seemed to try to get people to attack GiveWell and Holden quite loudly, and the above was part of the things I heard from that time. The above did not strike me as a statement intended to be very private, and also my model of Michael has norms that encourage sharing this kind of thing, even if it happens in private communication.
I didn’t downvote, but I almost did because it seems like it’s hard enough to reveal that kind of thing without also having to worry about social disapproval.
I, in fact, asked a CFAR instructor in 2016-17 whether the idea was to psychologically improve yourself until you became Elon Musk, and he said “yes”. This part of the plan was the same.
Re: “this part of the plan was the same”: IMO, some at CFAR were interested in helping some subset of people become Elon Musk, but this is different from the idea that everyone is supposed to become Musk and that that is the plan. IME there was usually mostly (though not invariably, which I expect led to problems; and for all I know “usually” may also have been the case in various parts and years of Leverage) acceptance for folks who did not wish to try to change themselves much.
Yeah, I very strongly don’t endorse this as a description of CFAR’s activities or of CFAR’s goals, and I’m pretty surprised to hear that someone at CFAR said something like this (unless it was Val, in which case I’m less surprised).
Most of my probability mass is on the CFAR instructor was taking “become Elon Musk” to be a sort of generic, hyperbolic term for “become very capable.”
The person I asked was Duncan. I suggested the “Elon Musk” framing in the question. I didn’t mean it literally, I meant him as an archetypal example of an extremely capable person. That’s probably what was meant at Leverage too.
I also have zero memory of this, and it is not the sort of sentiment I recall holding in any enduring fashion, or putting forth elsewhere.
I suspect I intended my reply pretty casually/metaphorically, and would have similarly answered “yes” if someone had asked me if we were trying to improve ourselves to become any number of shorthand examples of “happy, effective, capable, and sane.”
2016 Duncan apparently thought more of Elon Musk than 2021 Duncan does.
One of the weirdest ideas in Bay Area rationalist/adjacent circles is that you become someone like e.g. Elon Musk, hyper-productive and motivated, by introspecting a ton
There was an atmosphere of psycho-spiritual development, often involving Kegan stages.
I am confused, because I assumed that Kegan stages are typically used by people who believe they are superior to LW-style rationalists. You know, “the rationalists believe in objective reality, so they are at Kegan level 4, while I am a post-rationalist who respects deep wisdom and religion, so I am at Kegan level 5.”
Though I can’t find an example of him posting on LessWrong, Ethan Dickinson is in the Berkeley rationality community and is mentioned here as introducing people to Kegan stages. There are multiple others; these are just the people it was easy to find Internet evidence about.
There’s a lot of overlap in people posting about “rationalism” and “postrationalism”, it’s often a matter of self-identification rather than actual use of different methods to think, e.g. lots of “rationalists” are into meditation, lots of “postrationalists” use approximately Bayesian analysis when thinking about e.g. COVID. I have noticed that “rationalists” tend to think the “rationalist/postrationalist” distinction is more important than the “postrationalists” do; “postrationalists” are now on Twitter using vaguer terms like “ingroup” or “TCOT” (this corner of Twitter) for themselves.
I also mentioned a high amount of interaction between CFAR and Monastic Academy in the post.
To speak a little bit on the interaction between CFAR and MAPLE:
My understanding is that none of Anna, Val, Pete, Tim, Elizabeth, Jack, etc. (current or historic higher-ups at CFAR) had any substantial engagement with MAPLE. My sense is that Anna has spoken with MAPLE people a good bit in terms of total hours, but not at all a lot when compared with how many hours Anna spends speaking to all sorts of people all the time—much much less, for instance, than Anna has spoken to Leverage folks or CEA folks or LW folks.
I believe that Renshin Lee (née Lauren) began substantially engaging with MAPLE only after leaving their employment at CFAR, and drew no particular link between the two (i.e. was not saying “MAPLE is the obvious next step after CFAR” or anything like that, but rather was doing what was personally good for them).
I think mmmmaybe a couple other CFAR alumni or people-near-CFAR went to MAPLE for a meditation retreat or two? And wrote favorably about that, from the perspective of individuals? These (I think but do not know for sure) include people like Abram Demski and Qiaochu Yuan, and a small number of people from CFAR’s hundreds of workshop alumni, some of whom went on to engage with MAPLE more fully (Alex Flint, Herschel Schwartz).
But there was also strong pushback from CFAR staff alumni (me, Davis Kingsley) against MAPLE’s attempted marketing toward rationalists, and its claims of being an effective charity or genuine world-saving group. And there was never AFAIK a thing which fifty-or-more-out-of-a-hundred-people would describe as “a high amount of interaction” between the two orgs (no co-run events, no shared advertisements, no endorsements, no long ongoing back and forth conversations between members acting in their role as members, no trend of either group leaking members to the other group, no substantial exchange of models or perspectives, etc). I think it was much more “nodding respectfully to each other as we pass in the hallway” than “sitting down together at the lunch table.”
I could be wrong about this. I was sort of removed-from-the-loop of CFAR in late 2018/early 2019. It’s possible there was substantial memetic exchange and cooperation after that point.
But up until that point, there were definitely no substantive interactions, and nothing ever made its way to my ears in 2019 or 2020 that made me think that had changed.
I’m definitely open to people showing me I’m wrong, here, but given my current state of knowledge the claim of “high interaction between CFAR and Monastic Academy” is just false.
(Where it would feel true to claim high interaction between CFAR and MIRI, or CFAR and LW, or CFAR and CEA, or CFAR and SPARC, or even CFAR and Leverage. The least of these is, as far as I can tell, an order of magnitude more substantial than the interaction between CFAR and MAPLE.)
This is Ren, and I was like ”?!?” by the sentence in the post: “There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy.”
I am having trouble engaging with LW comments in general so thankfully Duncan is here with #somefacts. I pretty much agree with his list of informative facts.
More facts:
Adom / Quincy did a two-month apprenticeship at MAPLE, a couple years after being employed by CFAR. He and I are the only CFAR employees who’ve trained at MAPLE.
CFAR-adjacent people visit MAPLE sometimes, maybe for about a week in length.
Some CFAR workshop alums have trained at MAPLE or Oak as apprentices or residents, but I would largely not call them “people who worked with or at CFAR.” There are a lot of CFAR alums, and there are also a lot of MAPLE alums.
MAPLE and Oak have applied for EA grants in the past, which have resulted in them communicating with some CFAR-y type people like Anna Salamon, but this does not feel like a central example of “interaction” of the kind implied.
The inferential gap between the MAPLE and rationalist worldview is pretty large. There’s definitely an interesting “thing” about ex-CFAR staff turning to trad religion that you might want to squint at (I am one example, out of, I believe, three total), but I don’t like the way the OP tacks this sentence onto a section as though it were some kind of argument or evidence for some vague something something. And I think that’s why my reaction was ”?!?” and not just “hmm.”
But also, I cannot deny that the intuition jessicata has about MAPLE is not entirely off either. It gives off the same smells. But I still don’t like the placement of the sentence in the OP because I think it assumes too much.
FWIW I wouldn’t necessarily say that Kegan stages are important—they seem like an interesting model in part because they feel like they map quite well to some of the ways in which my own thought has changed over time. But I still only consider them to be at the level of “this is an interesting and intuitively plausible model”; there hasn’t been enough research on them to convincingly show that they’d be valid in the general population as well.
There was a period in something like 2016-2017 when some rationalists were playing around with Kegan stages in the Bay Area. Most people I knew weren’t huge fans of them, though the then-ED of CFAR (Pete Michaud) did have a tendency of bringing them up from time to time in a way I found quite annoying. It was a model a few people used from time to time, though my sense is that it never got much traction in the community. The “often” in the above quoted sentence definitely feels surprising to me, though I don’t know how many people at MIRI were using them at the time, and maybe it was more than in the rest of my social circle at the time. I still hear them brought up sometimes, but usually in a pretty subdued way, more referencing the general idea of people being able to place themselves in a broader context, but in a much less concrete and less totalizing way than the way I saw them being used in 2016-2017.
I was very peripheral to the Bay Area rationality community at that time, and I heard about Kegan levels enough to rub me the wrong way. It seemed bizarre to me that one man’s idiosyncratic theory of development would be taken so seriously by a community I generally thought was more discerning. That’s why I remember so clearly that it came up many times.
It was a model a few people used from time to time, though my sense is that it never got much traction in the community.
FWIW I think this understates the influence of Kegan levels. I don’t know how much people did differently because of it, which is maybe what you’re pointing at, but it was definitely a thing people had heard of and expected other people to have heard of and some people targeted directly.
Huh, some chance I am just wrong here, but to me it didn’t feel like Kegan levels had more prominence or expectation of being understood than e.g. land value taxes, which is also a topic some people are really into, but doesn’t feel to me like it’s very core to the community.
I also think it makes understanding the extent of the harm and ways to improve it a lot more difficult. I think the number of people who have been hurt by various things Leverage has done is really vastly larger than the number of people who have spoken out so far, in a ratio that I think is very different from what I believe is true about the rest of the community. As a concrete example, I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting), and I feel pretty confident that I would feel very different if I had similarly bad experiences with CFAR or MIRI, based on my interactions with both of these organizations.
I think this kind of information control feels like what ultimately flips things into the negative for me, in this situation with Leverage. Like, I think I am overall pretty in favor of people gathering together and working on a really intense project, investing really hard into some hypothesis that they have some special sauce that allows them to do something really hard and important that nobody else can do. I am also quite in favor of people doing a lot of introspection and weird psychology experiments on themselves, and to try their best to handle the vulnerability that comes with doing that near other people, even though there is a chance things will go badly and people will get hurt.
But the thing that feels really crucial in all of this is that people can stay well-informed and can get the space they need to disengage, can get an external perspective when necessary, and somehow stay grounded all throughout this process. Which feels much harder to do in an environment where people are directly lying to you, or where people are making quite explicit plots to discredit you, or harm you in some other way, if you do leave the group, or leak information.
I do notice that in the above I make various accusations of lying or deception by Leverage without really backing it up with specific evidence, which I apologize for, and I think people reading this should overall not take comments like mine at face value before having heard something pretty specific that backs up the accusations in them. I have various concrete examples I could give, but do notice that doing so would violate various implicit and explicit confidentiality agreements I made, that I wish I had not made, and I am still figuring out whether I can somehow extract and share the relevant details, without violating those agreements in any substantial way, or whether it might be better for me to break the implicit ones of those agreements (which seem less costly to break, given that I felt like I didn’t really fully consent to them), given the ongoing pretty high cost.
When it comes to agreements preventing disclosure of information, there’s often no agreement to keep the existence of the agreement itself secret. If you don’t think you can ethically (and given other risks) share the content that’s protected by certain agreements, it would be worthwhile to share more about the agreements and with whom you have them. This might also be accompanied by a request to those parties to agree to lift the agreement. It’s worthwhile to know who thinks they need to be protected by secrecy agreements.
It has taken me about three days to mentally update more fully on this point. It seems worth highlighting now, using quotes from Oli’s post:
I am beginning to suspect that, even in the total privacy of their own minds, there are people who went through something at Leverage who can’t have certain thoughts, out of fear.
I believe it is not my place (or anyone’s?) to force open a locked door, especially locked mental doors.
Zoe’s post may have initially given me the wrong impression—that other ex-Leverage people would also be able to articulate their experiences clearly and express their fears in a reasonable and open way. I guess I’m updating away from that initial impression.
//
I suspect ‘combining forces’ with existing heavy-handed legal systems can sometimes be used in such a dominant manner that it damages people’s epistemics and health. And this is why a lot of ‘small-time’ orgs and communities try to avoid attention of heavy-handed bureaucracies like the IRS, psych wards, police depts, etc., which are often only called upon in serious emergencies.
I have a wonder about whether a small-time org willing to use (way above weight class) heavy-handed legal structures (like, beyond due diligence, such as actual threats of litigation) is evidence of that org acting in bad faith or doing something bad to its members.
I’ve signed an NDA at MAPLE to protect donor information, but it’s pretty basic stuff, and I have zero actual fear of litigation from MAPLE, and the NDA itself is not covering things I expect I’ll want to do (such as leak info about funders). I’ve signed NDAs in the past for keeping certain intellectual property safe from theft (e.g. someone’s inventing a new game and doesn’t want others to get their idea). These seem like reasonable uses of NDAs.
When I went to my first charting session at Leverage, they … also asked me to sign some kind of NDA? As a client. It was a little weird? I think they wanted to protect intellectual property of their … I kind of don’t really remember honestly. Maybe if I’d tried to publish a paper on Connection Theory or Charting or Belief Reporting, they would have asked me to take it down. ¯\_(ツ)_/¯
maybe an unnecessary or heavy-handed integration between an org and legal power structures is a wtf kind of sign and seems good to try to avoid?
I really don’t know about the experience of a lot of the other ex-Leveragers, but the time it took Zoe to post her piece, the number and kind of allies she felt she needed before posting it, and the hedging qualifications within the post itself detailing her fears of retribution, plus just how many people’s initial responses to the post were to applaud her courage, might give you a sense that her post was unusually, extremely difficult to make public, and that others might not have that same willingness yet (she even mentions it at the bottom, and presumably she knows more about how other ex-Leveragers feel than we do).
I, um, don’t have anything coherent to say yet. Just a heads up. I also don’t really know where this comment should go.
But also I don’t really expect to end up with anything coherent to say, and it is quite often the case that when I have something to say, people find it worthwhile to hear my incoherence anyway, because it contains things that underlie their own confused thoughts, and after hearing it they are able to un-confuse some of those thoughts and start making sense themselves. Or something. And I do have something incoherent to say. So here we go.
I think there’s something wrong with the OP. I don’t know what it is, yet. I’m hoping someone else might be able to work it out, or to see whatever it is that’s causing me to say “something wrong” and then correctly identify it as whatever it actually is (possibly not “wrong” at all).
On the one hand, I feel familiarity in parts of your comment, Anna, about “matches my own experiences/observations/hearsay at and near MIRI and CFAR”. Yet when you say “sensible”, I feel, “no, the opposite of that”.
Even though I can pick out several specific places where Jessicata talked about concrete events (e.g. “I believed that I was intrinsically evil” and “[Michael Vassar] was commenting on social epistemology”), I nevertheless have this impression that I most naturally conceptualize as “this post contained no actual things”. While reading it, I felt like I was gazing into a lake that is suspended upside down in the sky, and trying to figure out whether the reflections I’m watching in its surface are treetops or low-hanging clouds. I felt like I was being invited into a mirror-maze that the author had been trapped in for… an unknown but very long amount of time.
There’s something about nearly every phrase (and sentence, and paragraph, and section) here that I just, I just want to spit out, as though the phrase itself thinks it’s made of potato chunks but in fact, out of the corner of my eye, I can tell it is actually made out of a combination of upside-down cloud reflections and glass shards.
Let’s try looking at a particular, not-very-carefully-chosen sentence.
I have so many questions. “As a consequence” seems fine; maybe that really is potato chunks. But then, “the people most mentally concerned” happens, and I’m like, Which people were most mentally concerned? What does it mean to be mentally concerned? How could the author tell that those people were mentally concerned? Then we have “with strange social metaphysics”, and I want to know “what is social metaphysics?”, “what is it for social metaphysics to be strange or not strange?” and “what is it to be mentally concerned with strange social metaphysics”? Next is “were marginalized”. How were they marginalized? What caused the author to believe that they were marginalized? What is it for someone to be marginalized? And I’m going to stop there because it’s a long sentence and my reaction just goes on this way the whole time.
I recognize that it’s possible to ask this many questions of this kind about absolutely any sentence anyone has ever uttered. Nevertheless, I have a pretty strong feeling that this sentence calls for such questions, somehow, much more loudly than most sentences do. And the questions the sentences call for are rarely answered in the post. It’s like a tidal wave of… of whatever it is. More and more of these phrases-calling-for-questions pile up one after another, and there’s no time in between to figure out what’s going on, if you want to follow the post whatsoever.
There are definitely good things in here. A big part of my impression of the author, based on this post, is that they’re smart and insightful, and trying to make the world better. I just, also have this feeling like something… isn’t just wrong here, but is going wrong, and maybe the going has momentum, and I wonder how many readers will get temporarily trapped in the upside down mirror maze while thinking they’re eating potatoes, unless they slow way way down and help me figure out what on earth is happening in this post.
This matches my impression in a certain sense. Specifically, the density of gears in the post (elements that would reliably hold arguments together, confer local validity, or pin them to reality) is low. It’s a work of philosophy, not investigative journalism. So there is a lot of slack in shifting the narrative in any direction, which is dangerous for forming beliefs (as opposed to setting up new hypotheses), especially if done in a voice that is not your own. The narrative of the post is coherent and compelling, it’s a good jumping-off point for developing it into beliefs and contingency plans, but the post itself can’t be directly coerced into those things, and this epistemic status is not clearly associated with it.
How do you think Zoe’s post, or mainstream journalism about the rationalist community (e.g. Cade Metz’s article, perhaps there are other better ones I don’t know about) compare on this metric? Are there any examples of particularly good writeups about the community and its history you know about?
I’m not saying that the post isn’t good (I did say it’s coherent and compelling), and I’m not at this moment aware of something better on its topic (though my ability to remain aware of such things is low, so that doesn’t mean much). I’m saying specifically that gear density is low, so it’s less suitable for belief formation than hypothesis setup. This is relevant as a more technical formulation of what I’m guessing LoganStrohl is gesturing at.
I think investigative journalism is often terrible, as is philosophy, but the concepts are meaningful in characterizing types of content with respect to gear density, including high quality content.
I am intending this more as contribution of relevant information and initial models than firm conclusions; conclusions are easier to reach the more different relevant information and models are shared by different people, so I suppose I don’t have a strong disagreement here.
Sure, and this is clear to me as a practitioner of the yoga of taking in everything only as a hypothesis/narrative, mining it for gears, and separately checking what beliefs happen to crystallize out of this, if any. But for someone who doesn’t always make this distinction, not having a clear indication of the status of the source material needlessly increases epistemic hygiene risks, so it’s a good norm to make epistemic status of content more legible. My guess is that LoganStrohl’s impression is partly of violation of this norm (which I’m not even sure clearly happened), shared by a surprising number of upvoters.
Do you predict Logan’s comment would have been much different if I had written “[epistemic status: contents of memory banks, arranged in a parseable semicoherent narrative sequence, which contains initial models that seem to compress the experiences in a Solomonoff sense better than alternative explanations, but which aren’t intended to be final conclusions, given that only a small subset of the data has been revealed and better models are likely to be discovered in the future]”? I think this is to some degree implied by the title which starts with “My experience...” so I don’t think this would have made a large difference, although I can’t be sure about Logan’s counterfactual comment.
I’m not sure, but the hypothesis I’m chasing in this thread, intended as a plausible steelman of Logan’s comment, thinks so. One alternative that is also plausible to me is motivated cognition that would decry undesirable source material for low gear density, and that one predicts little change in response to more legible epistemic status.
I expect the alternative hypothesis to be true given the difference between the responses to this post and Zoe’s post.
If you are genuinely asking, I think cutting that down into something slightly less clinical sounding (because it sounds sarcastic when formalized) would probably take a little steam out of that type of opposition, yes.
This reads like you feel compelled to avoid parsing the content of the OP, and instead intend to treat the criticisms it makes as a Lovecraftian horror the mind mustn’t engage with. Attempts to interpret this sort of illegible intent-to-reject as though it were well-intentioned criticism end up looking like:
Very helpful to have a crisp example of this in text.
ETA: I blanked out the first few times I read Jessica’s post on anti-normativity, but interpreted that accurately as my own intent to reject the information rather than projecting my rejection onto the post itself, treated that as a serious problem I wanted to address, and was able to parse it after several more attempts.
I understood the first sentence of your comment to be something like “one of my hypotheses about Logan’s reaction is that Logan has some internal mental pressure to not-parse or not-understand the content of what Jessica is trying to convey.”
That makes sense to me as a hypothesis, if I’ve understood you, though I’d be curious for some guesses as to why someone might have such an internal mental pressure, and what it would be trying to accomplish or protect.
I didn’t follow the rest of the comment, mostly due to various words like “this” and “it” having ambiguous referents. Would you be willing to try everything after “attempts” again, using 3x as many words?
Summary:
Logan reports a refusal to parse the content of the OP. Logan locates a problem nonspecifically in the OP, not in Logan’s specific reaction to it. This implies a belief that it would be bad to receive information from Jessica.
Logan reports a refusal to parse the content of the OP
Most of this isn’t even slightly ambiguous, and Jessica explains most of the things being asked about, with examples, in the body of the post.
Logan locates a nonspecific problem in the OP, not in Logan’s response to it.
This isn’t a description of a specific criticism or disagreement. This is a claim that the post is nonspecifically going to cause readers to become disoriented and trapped.
This implies a belief that it would be bad to receive information from Jessica.
If the objection isn’t that Jessica is mistaken but that she’s “going wrong,” that implies that the contents of Jessica’s mind are dangerous to interact with. This is the basic trope of Lovecraftian horror—that there are some real things the human mind can’t handle and therefore wants to avoid knowing. If something is dangerous, like nuclear waste or lions, we might want to contain it or otherwise keep it at a distance.
Since there’s no mechanism suggested, this looks like an essentializing claim. If the problem isn’t something specific that Jessica is doing or some specific transgression she’s committing, then maybe that means Jessica’s just intrinsically dangerous. Even if not, if Jessica were going to take this concern seriously, without a theory of how what she’s doing is harmful, she would have to treat all of her intentions as dangerous and self-contain.
In other words, she’d have to proceed as though she might be intrinsically evil (“isn’t just wrong here, but is going wrong, and maybe the going has momentum”), is in a hell of her own creation (“I felt like I was being invited into a mirror-maze that the author had been trapped in for… an unknown but very long amount of time.”), and ought to avoid taking actions, i.e. become catatonic.
I also don’t know what “social metaphysics” means.
I get the mood of the story. If you look at specific accusations, here is what I found, maybe I overlooked something:
This is like 5-10% of the text. A curious thing is that it is actually the remaining 90-95% of the text that evokes bad feelings in the reader; at least in my case.
To compare, when I was reading Zoe’s article, I was shocked by the described facts. When I was reading Jessica’s article, I was shocked by the horrible things that happened to her, but the facts felt… most of them boring… the most worrying part was about a group of people who decided that CFAR was evil, spent some time blogging against CFAR, then some of them killed themselves; which is very sad, but I fail to see how exactly CFAR is responsible for this, when it seems like the anti-CFAR group actually escalated the underlying problems to the point of suicide. (This reminds me of XiXiDu describing how fighting against MIRI causes him health problems; I feel bad about him having the problems, but I am not sure what MIRI could possibly do to stop this.)
Jessica’s narrative is that MIRI/CFAR is just like Leverage, except less transparent. Yet when she mentions specific details, it often goes somewhat like this: “Zoe mentioned that Leverage did X. CFAR does not do X, but I feel terrible anyway, so it is similar. Here is something vaguely analogical.” Like, how can you conclude that not doing something bad is even worse than doing it, because it is less transparent?! Of course it is less transparent if it, you know, actually does not exist.
Or maybe I’m tired and failing at reading comprehension. I wish someone would rewrite the article, to focus on the specific accusations against MIRI/CFAR, and remove all those analogies-except-not-really with Zoe; just make it a standalone list of specific accusations. Then let’s discuss that.
This comment was very helpful. Thank you.
Thanks for the expansion! Mulling.
Thanks for this articulate and vulnerable writeup. I do think we might all agree that the experience you are describing seems like a very good description of what somebody in a cult would go through while facing information that would trigger disillusionment.
I am not asserting you are in a cult, maybe I should use more delicate language, but in context I would like to point out this (to me) obvious parallel.
I feel like one really major component that is missing from the story above, in particular from a number of the psychotic breaks, is Michael Vassar and a bunch of the people he tends to hang out with. I don’t have a ton of detail on exactly what happened in each of the cases where someone seemed to have a really bad time, but having looked into it for a few hours in each case, I think all three of them happened in pretty close proximity to the person having spent a bunch of time (in some of the cases after taking psychedelic drugs) with Michael.
I think this is important because Michael has I think a very large psychological effect on people, and also has some bad tendencies to severely outgroup people who are not part of his very local social group, and also some history of attacking outsiders who behave in ways he doesn’t like very viciously, including making quite a lot of very concrete threats (things like “I hope you will be guillotined, and the social justice community will find you and track you down and destroy your life, after I do everything I can to send them onto you”). I personally have found those threats to very drastically increase the stress I experience from interfacing with Michael (and some others in his social group), and also my models of how these kinds of things happen have a lot to do with dynamics where this kind of punishment is expected if you deviate from the group norm.
I am not totally confident that Michael has played a big role in all of the bad psychotic experiences listed above, but my current best guess is that he has, and I do indeed pretty directly encourage people to not spend a lot of time with Michael (though I do think talking to him occasionally is actually great and I have learned a lot of useful things from talking to him, and also think he has helped me see various forms of corruption and bad behavior in my environment that I am genuinely grateful to have noticed, but I very strongly predict that I would have a very intensely bad experience if I were to spend more time around Michael, in a way I would not endorse in the long run).
Of the 4 hospitalizations and 1 case of jail time I know about, 3 of those hospitalized (including me) were talking significantly with Michael, and the others weren’t afaik (and neither were the 2 suicidal people), though obviously I couldn’t know about all conversations that were happening. Michael wasn’t talking much with Leverage people at the time.
I hadn’t heard of the statement about guillotines, that seems pretty intense.
I talked with someone recently who hadn’t been in the Berkeley scene specifically but who had heard that Michael was “mind-controlling” people into joining a cult, and decided to meet him in person, at which point he concluded that Michael was actually doing some of the unique interventions that could bring people out of cults, which often involves causing them to notice things they’re looking away from. It’s common for there to be intense psychological reactions to this (I’m not even thinking of the psychotic break as the main one, since that didn’t proximately involve Michael; there have been other conversations since then that have gotten pretty emotionally/psychologically intense), and that it’s common for people to not want to have such reactions, although clearly at least some people think they’re worth having for the value of learning new things.
IIRC the person in the one case of jail time also had substantial interaction with Michael relatively shortly before the psychotic break occurred. Though someone else might have better info here and should correct me if I am wrong. I don’t know of any 4th case, so I believe you that they didn’t have much to do with Michael. This makes the current record 4/5 to me, which sure seems pretty high.
I did not intend to indicate Michael had any effect on Leverage people, or to say that all or even a majority of the difficult psychological problems that people had in the community are downstream of Michael. I do think he had a large effect on some of the dynamics you are talking about in the OP, and I think any picture of what happened/is happening seems very incomplete without him and the associated social cluster.
I think the part about Michael helping people notice that they are in some kind of bad environment seems plausible to me, though doesn’t have most of my probability mass (~15%), and most of my probability mass (~60%) is indeed that Michael mostly just leverages the same mechanisms for building a pretty abusive and cult-like ingroup that are common, with some flavor of “but don’t you see that everyone else is completely crazy and evil” thrown into it.
I think it is indeed pretty common for abusive environments to start with “here is why your current environment is abusive in this subtle way, and that’s also why it’s OK for me to do these abusive-seeming things, because it’s not worse than anywhere else”. I think this was a really large fraction of what happened with Brent, and I also think a pretty large fraction of what happened with Leverage. I also think it’s a large fraction of what’s going on with Michael.
I do want to reiterate that I do assign substantial probability mass (~15%) to your proposed hypothesis being right, and am interested in more evidence for it.
I was pretty involved in that case after the arrest and for several months after and spoke to MV about it, and AFAICT that person and Michael Vassar only met maybe once casually. I think he did spend a lot of time with others in MV’s clique though.
Ah, yeah, my model is that the person had spent a lot of time with MV’s clique, though I wasn’t super confident they had talked to Michael in particular. Not sure whether I would still count this as being an effect of Michael’s actions, seems murkier than I made it out to be in my comment.
I think one of the ways of disambiguating here is to talk to people outside your social bubble, e.g. people who live in different places, people with different politics, people in different subcultures or on different websites (e.g. Twitter or Reddit), people you run into in different contexts, people who have had experience in different mainstream institutions (e.g. different academic departments, startups, mainstream corporations). Presumably, the more of a culty bubble you’re in, the more prediction error this will generate, and the harder it will be to establish communication protocols across the gap. This establishes a point of comparison between people in bubble A vs B.
I spent a long part of the 2020 quarantine period with Michael and some friends of his (and friends of theirs) who were previously in a non-bay-area cult, which exposed me to a lot of new perspectives I didn’t know about (not just theirs, but also those of some prison reform advocates and religious people), and made Michael seem less extremal or insular in comparison, since I wasn’t just comparing him to the bubble of people who I already knew about.
Hmm, I’ve tried to read this comment for something like 5 minutes, but I can’t really figure out its logical structure. Let me give it a try in a more written format:
Presumably this is referring to distinguishing the hypothesis that Michael is kind of causing a bunch of cult-like problems, from the hypothesis that he is helping people see problems that are actually present.
I don’t understand this part. Why would there be a monotonic relationship here? I agree with the bubble part, and while I expect there to be a vague correlation, it doesn’t feel like it measures anything like the core of what’s going on. I wouldn’t measure the cultishness of an economics department based on how good they are at talking to improv students. It might still be good for them to get better at talking to improv students, but failure to do so doesn’t feel like particularly strong evidence to me (compared to other dimensions, like the degree to which they feel alienated from the rest of the world, or have psychotic breaks, or feel under a lot of social pressure to not speak out, or many other things that seem similarly straightforward to measure but feel like they get more at the core of the thing).
But also, I don’t understand how I am supposed to disambiguate things here? Like, maybe the hypothesis here is that by doing this myself I could understand how insular my own environment is? I do think that seems like a reasonable point of evidence, though I also think my experiences have been very different from people at MIRI or CFAR. I also generally don’t have a hard time establishing communication protocols across these kinds of gaps, as far as I can tell.
This is interesting, and definitely some evidence, and I appreciate you mentioning it.
If you think the anecdote I shared is evidence, it seems like you agree with my theory to some extent? Or maybe you have a different theory for how it’s relevant?
E.g. say you’re an econ student, and there’s this one person in the econ department who seems to have all these weird opinions about social behavior and think body language is unusually important. Then you go talk to some drama students and find that they have opinions that are even more extreme in the same direction. It seems like the update you should make is that you’re in a more insular social context than the person with opinions on social behavior, who originally seemed to you to be in a small bubble that wasn’t taking in a lot of relevant information.
(basically, a lot of what I’m asserting constitutes “being in a cult” is living in a simulation of an artificially small, closed world)
The update was more straightforward, based on “I looked at some things that are definitely cults, what Michael does seems less extremal and insular in comparison, therefore it seems less likely for Michael to run into the same problems”. I don’t think that update required agreeing with your theory to any substantial degree.
I do think your paragraph still clarified things a bit for me, though with my current understanding, presumably the group to compare yourself against are less cults, and more just like, average people who are somewhat further out on some interesting dimension. And if you notice that average people seem really crazy and cult-like to you, then I do think this is something to pay attention to (though like, average people are also really crazy on lots of topics, like schooling and death and economics and various COVID related things that I feel pretty confident in, and so I don’t think this is some kind of knockdown argument, though I do think having arrived at truths that large fractions of the population don’t believe definitely increase the risks from insularity).
I definitely don’t want to imply that agreement with the majority is a metric, rather the ability to have a discussion at all, to be able to see part of the world they’re seeing and take that information into account in your own view (which might be called “interpretive labor” or “active listening”).
Agree. I do think the two are often kind of entwined (like, I am not capable of holding arbitrarily many maps of the world in my mind at the same time, so when I arrive at some unconventional belief that has broad consequences, the new models based on that belief will often replace more conventional models of the domain, and I will have to spend time regenerating the more conventional models and beliefs in conversation with someone who doesn’t hold the unconventional belief, which does frequently make the conversation kind of harder, and which I still don’t think is evidence of something going terribly wrong).
Oh, something that might not have been clear is that talking with other people Michael knows made it clear that Michael was less insular than MIRI/CFAR people (who would have been less able to talk with such a diverse group of people, afaict), not just that he was less insular than people in cults.
Do you know if the 3 people who were talking significantly with Michael did LSD at the time or with him?
Erm… feel free to keep plausible deniability. Taking LSD seems to me like a pretty worthwhile thing to do in lots of contexts, and I’m willing to put a substantial amount of resources into defending against legal attacks (or supporting you in the face of them) that are caused by you replying openly here. (I don’t know if that’s plausible; I’ve not thought about it much, so I mention it anyway.)
I had taken a psychedelic previously with Michael; one other person probably had; the other probably hadn’t; I’m quite unsure of the latter two judgments. I’m not going to disambiguate about specific drugs.
What kinds of things was he attacking people for?
I am not fully sure. I have heard him say very similar things to the above directed at Holden (and have heard reports of the things I put in quotes above). I think in that case the thing he is attacking him for is leveraging people’s desire to be a morally good person in a way that they don’t endorse (and that plays into various guilt narratives), to get them to give him money and to get them to dedicate their life towards Effective Altruism, and via that technique preventing a substantial fraction of the world’s top talent from dedicating themselves to actually important problems, and also causing them various forms of psychological harm.
Do you have an idea of when those things were directed at Holden?
UPDATE: I mostly retract this comment. It was clarified that the threat was made in a mostly public context which changes the frame for me significantly.
I think it is problematic to post a presumably very private communication (the threat) to such a broad audience. Even when it is correctly attributed, it lacks all the context of the situation it was uttered in. It lacks any amends that may or may not have been made, and it exposes many people to the dynamics of the narrative resulting from the posting here. I’m not saying you shouldn’t post it. I don’t know the context and what you know either. But I think you should take ownership of the consequences of citing it and any way it might escalate from here (a norm proposed by Scott Adams a while ago).

I don’t think the context in which I heard about this communication was very private. There was a period where Michael seemed to try to get people to attack GiveWell and Holden quite loudly, and the above was part of the things I heard from that time. The above did not strike me as a statement intended to be very private, and my model of Michael is that he has norms that encourage sharing this kind of thing, even if it happens in private communication.
Thank you for the clarification. I think it is valuable to include this context in your comment.
I will adjust my comment accordingly.
Can somebody give me some hints as to which norms this could have been downvoted under?
I didn’t downvote, but I almost did because it seems like it’s hard enough to reveal that kind of thing without also having to worry about social disapproval.
Re: “this part of the plan was the same”: IMO, some at CFAR were interested in helping some subset of people become Elon Musk, but this is different from the idea that everyone is supposed to become Musk and that that is the plan. IME there was usually mostly (though not invariably, which I expect led to problems; and for all I know “usually” may also have been the case in various parts and years of Leverage) acceptance for folks who did not wish to try to change themselves much.
Yeah, I very strongly don’t endorse this as a description of CFAR’s activities or of CFAR’s goals, and I’m pretty surprised to hear that someone at CFAR said something like this (unless it was Val, in which case I’m less surprised).
Most of my probability mass is on the CFAR instructor having taken “become Elon Musk” to be a sort of generic, hyperbolic term for “become very capable.”
The person I asked was Duncan. I suggested the “Elon Musk” framing in the question. I didn’t mean it literally, I meant him as an archetypal example of an extremely capable person. That’s probably what was meant at Leverage too.
I do not doubt Jessica’s report here whatsoever.
I also have zero memory of this, and it is not the sort of sentiment I recall holding in any enduring fashion, or putting forth elsewhere.
I suspect I intended my reply pretty casually/metaphorically, and would have similarly answered “yes” if someone had asked me if we were trying to improve ourselves to become any number of shorthand examples of “happy, effective, capable, and sane.”
2016 Duncan apparently thought more of Elon Musk than 2021 Duncan does.
Related Tweet by Mason:
Okay, here goes the nitpicking...
I am confused, because I assumed that Kegan stages are typically used by people who believe they are superior to LW-style rationalists. You know, “the rationalists believe in objective reality, so they are at Kegan level 4, while I am a post-rationalist who respects deep wisdom and religion, so I am at Kegan level 5.”
Here are some examples of long-time LW posters who think Kegan stages are important:
Kaj Sotala
G. Gordon Worley III
Malcolm Ocean
Though I can’t find an example of him posting on LessWrong, Ethan Dickinson is in the Berkeley rationality community and is mentioned here as introducing people to Kegan stages. There are multiple others, these are just the people who it was easy to find Internet evidence about.
There’s a lot of overlap in people posting about “rationalism” and “postrationalism”, it’s often a matter of self-identification rather than actual use of different methods to think, e.g. lots of “rationalists” are into meditation, lots of “postrationalists” use approximately Bayesian analysis when thinking about e.g. COVID. I have noticed that “rationalists” tend to think the “rationalist/postrationalist” distinction is more important than the “postrationalists” do; “postrationalists” are now on Twitter using vaguer terms like “ingroup” or “TCOT” (this corner of Twitter) for themselves.
I also mentioned a high amount of interaction between CFAR and Monastic Academy in the post.
To speak a little bit on the interaction between CFAR and MAPLE:
My understanding is that none of Anna, Val, Pete, Tim, Elizabeth, Jack, etc. (current or historic higher-ups at CFAR) had any substantial engagement with MAPLE. My sense is that Anna has spoken with MAPLE people a good bit in terms of total hours, but not at all a lot when compared with how many hours Anna spends speaking to all sorts of people all the time—much much less, for instance, than Anna has spoken to Leverage folks or CEA folks or LW folks.
I believe that Renshin Lee (née Lauren) began substantially engaging with MAPLE only after leaving their employment at CFAR, and drew no particular link between the two (i.e. was not saying “MAPLE is the obvious next step after CFAR” or anything like that, but rather was doing what was personally good for them).
I think mmmmaybe a couple other CFAR alumni or people-near-CFAR went to MAPLE for a meditation retreat or two? And wrote favorably about that, from the perspective of individuals? These (I think but do not know for sure) include people like Abram Demski and Qiaochu Yuan, and a small number of people from CFAR’s hundreds of workshop alumni, some of whom went on to engage with MAPLE more fully (Alex Flint, Herschel Schwartz).
But there was also strong pushback from CFAR staff alumni (me, Davis Kingsley) against MAPLE’s attempted marketing toward rationalists, and its claims of being an effective charity or genuine world-saving group. And there was never AFAIK a thing which fifty-or-more-out-of-a-hundred-people would describe as “a high amount of interaction” between the two orgs (no co-run events, no shared advertisements, no endorsements, no long ongoing back and forth conversations between members acting in their role as members, no trend of either group leaking members to the other group, no substantial exchange of models or perspectives, etc). I think it was much more “nodding respectfully to each other as we pass in the hallway” than “sitting down together at the lunch table.”
I could be wrong about this. I was sort of removed-from-the-loop of CFAR in late 2018/early 2019. It’s possible there was substantial memetic exchange and cooperation after that point.
But up until that point, there were definitely no substantive interactions, and nothing ever made its way to my ears in 2019 or 2020 that made me think that had changed.
I’m definitely open to people showing me I’m wrong, here, but given my current state of knowledge the claim of “high interaction between CFAR and Monastic Academy” is just false.
(Where it would feel true to claim high interaction between CFAR and MIRI, or CFAR and LW, or CFAR and CEA, or CFAR and SPARC, or even CFAR and Leverage. The least of these is, as far as I can tell, an order of magnitude more substantial than the interaction between CFAR and MAPLE.)
This is Ren, and I was like “?!?” at the sentence in the post: “There is a significant degree of overlap between people who worked with or at CFAR and people at the Monastic Academy.”
I am having trouble engaging with LW comments in general so thankfully Duncan is here with #somefacts. I pretty much agree with his list of informative facts.
More facts:
Adom / Quincy did a two-month apprenticeship at MAPLE, a couple years after being employed by CFAR. He and I are the only CFAR employees who’ve trained at MAPLE.
CFAR-adjacent people visit MAPLE sometimes, maybe for about a week in length.
Some CFAR workshop alums have trained at MAPLE or Oak as apprentices or residents, but I would largely not call them “people who worked with or at CFAR.” There are a lot of CFAR alums, and there are also a lot of MAPLE alums.
MAPLE and Oak have applied for EA grants in the past, which have resulted in them communicating with some CFAR-y type people like Anna Salamon, but this does not feel like a central example of “interaction” of the kind implied.
The inferential gap between the MAPLE and rationalist worldview is pretty large. There’s definitely an interesting “thing” about ex-CFAR staff turning to trad religion that you might want to squint at (I am one example, out of, I believe, three total), but I don’t like the way the OP tacks this sentence onto a section as though it were some kind of argument or evidence for some vague something something. And I think that’s why my reaction was ”?!?” and not just “hmm.”
But also, I cannot deny that the intuition jessicata has about MAPLE is not entirely off either. It gives off the same smells. But I still don’t like the placement of the sentence in the OP because I think it assumes too much.
Thanks, this adds helpful details. I’ve linked this comment in the OP.
As someone who was more involved with CFAR than Duncan was from 2019 on, all this sounds correct to me.
I was also planning to leave a comment with a similar take.
FWIW I wouldn’t necessarily say that Kegan stages are important—they seem like an interesting model in part because they feel like they map quite well to some of the ways in which my own thought has changed over time. But I still only consider them to be at the level of “this is an interesting and intuitively plausible model”; there hasn’t been enough research on them to convincingly show that they’d be valid in the general population as well.
There was a period in something like 2016-2017 when some rationalists were playing around with Kegan stages in the Bay Area. Most people I knew weren’t a huge fan of them, though the then-ED of CFAR (Pete Michaud) did have a tendency of bringing them up from time to time in a way I found quite annoying. It was a model a few people used from time to time, though my sense is that it never got much traction in the community. The “often” in the above quoted sentence definitely feels surprising to me, though I don’t know how many people at MIRI were using them at the time, and maybe it was more than in the rest of my social circle at the time. I still hear them brought up sometimes, but usually in a pretty subdued way, more referencing the general idea of people being able to place themselves in a broader context, but in a much less concrete and less totalizing way than the way I saw them being used in 2016-2017.
I was very peripheral to the Bay Area rationality community at that time, and I heard about Kegan levels often enough to rub me the wrong way. It seemed bizarre to me that one man’s idiosyncratic theory of development would be taken so seriously by a community I generally thought was more discerning. That’s why I remember so clearly that it came up many times.
+1, except I was more physically and maybe socially close.
FWIW I think this understates the influence of Kegan levels. I don’t know how much people did differently because of it, which is maybe what you’re pointing at, but it was definitely a thing people had heard of and expected other people to have heard of and some people targeted directly.
Huh, some chance I am just wrong here, but to me it didn’t feel like Kegan levels had more prominence or expectation of being understood than e.g. land value taxes, which is also a topic some people are really into, but doesn’t feel to me like it’s very core to the community.
Datapoint: I understand neither Kegan levels nor land value taxes.