I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he’s “causing psychotic breaks” and “jailbreaking people” through conversation, “that listening too much to Vassar [causes psychosis], predictably”) isn’t obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.
Yes, I agree with you that all of this is very awkward.
I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.
But we have to admit at least small violations of it even to get the concept of “cult”. Not just the sort of weak cults we’re discussing here, but even the really strong cults like Heaven’s Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven’s Gate is bad for them, and leave. When we use the word “cult”, we’re implicitly agreeing that this doesn’t always work, and we’re bringing in creepier and less comprehensible ideas like “charisma” and “brainwashing” and “cognitive dissonance”.
(and the same thing with the concept of “emotionally abusive relationship”)
I don’t want to call the Vassarites a cult because I’m sure someone will confront me with a Cult Checklist that they don’t meet, but I think that it’s not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it’s weird that you can get that far without robes and chanting or anything, but I never claimed to really understand exactly how cults work, plus I’m sure the drugs helped.
I think believing cults are possible is different in degree if not in kind from Leverage “doing seances...to call on demonic energies and use their power to affect the practitioners’ social standing”. I’m claiming, though I can’t prove it, that what I’m saying is more towards the “believing cults are possible” side.
I’m actually very worried about this! I hate admitting cults are possible! If you admit cults are possible, you have to acknowledge that the basic liberal model has gaps, and then you get things like: if an evangelical deconverts to atheism, the other evangelicals can say “Oh, he’s in a cult, we need to kidnap and deprogram him since his best self wouldn’t agree with the deconversion.” I want to be extremely careful about when we do things like that, which is why I’m not actually “calling for isolating Michael Vassar from his friends”. I think in the Outside View we should almost never do this!
But you were the one to mention this cluster of psychotic breaks, and I am trying to provide what I think is a more accurate perspective on them. Maybe in the future we learn that this was because of some weird neuroinflammatory virus that got passed around at a Vassarite meeting and we laugh that we were ever dumb enough to think a person/group could transmit psychotic breaks. But until then, I think the data point that all of this was associated with Vassar and the Vassarites is one we shouldn’t just ignore.
It seems to me like in the case of Leverage, them working 75 hours per week reduced the time they could have used to apply Reason and conclude that they were in a system that’s bad for them.
That’s very different from someone having a few conversations with Vassar, adopting a new belief, and spending a lot of time reasoning about it alone, with the belief staying stable without being embedded in a strong environment that makes independent thought hard by keeping people busy.
A cult is by its nature a social institution, not just a meme that someone can pass around via a few conversations.
Perhaps the proper word here might be “manipulation” or “bad influence”.
I think “mind virus” is fair. Vassar spoke a lot about how the world as it is can’t be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny.
The thing with “bad influence” is that it’s a pretty value-laden notion. In a religious town, the biology teacher who tells the children about evolution and explains how it makes sense that our history goes back a lot further than a few thousand years is reasonably described as a bad influence by the parents.
The biology teacher gets the children to doubt the religious authorities. Those children can then also be a bad influence on others by getting them to doubt authorities as well. In a similar way, Vassar gets people to question other authorities and social conventions, and those ideas can then be passed on.
Vassar speaks about things like Moral Mazes. Memes like that make people distrust institutions. Those are the kind of bad influence that can get people to quit their jobs.
Talking about the biology teacher as if they intend to start an evolution cult feels a bit misleading.
It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.
Let’s consider a disjunction: 1: There isn’t a big effect here, 2: There is a big effect here.
In case 1:
It might make sense to discourage people from talking too much about “charisma”, “auras”, “mental objects”, etc, since they’re pretty fake, really not the primary factors to think about when modeling society.
The main problem with the relevant discussions at Leverage is that they’re making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
The case made against Michael, that he can “cause psychotic breaks” by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it’s basically a witch hunt. We should have a much more moderated, holistic picture where there are multiple people in a social environment affecting a person, and the people closer to them generally have more influence, such that causing psychotic breaks 2 hops out is implausible, and causing psychotic breaks with only occasional conversation (and very little conversation close to the actual psychotic episode) is also quite unlikely.
There isn’t a significant falsification of liberal individualism.
In case 2:
Since there’s a big effect, it makes sense to spend a lot of energy speculating on “charisma”, “auras”, “mental objects”, and similar hypotheses. “Charisma” has fewer details than “auras” which has fewer details than “mental objects”; all of them are hypotheses someone could come up with in the course of doing pre-paradigmatic study of the phenomenon, knowing that while these initial hypotheses will make mis-predictions sometimes, they’re (in expectation) moving in the direction of clarifying the phenomenon. We shouldn’t just say “charisma” and leave it at that, it’s so important that we need more details/gears.
Leverage’s claims about weird mind powers are to some degree plausible, there’s a big phenomenon here even if their models are wrong/silly in some places. The weird social dynamics are a result of an actual attempt to learn about and manage this extremely important phenomenon.
The claim that Michael can cause psychotic breaks by talking with people is plausible. The claim that he can cause psychotic breaks 2 hops out might be plausible depending on the details (this is pretty similar to a “mental objects” claim).
There is a significant falsification of liberal individualism. Upon learning about the details of how mental influence works, you could easily conclude that some specific form of collectivism is much more compatible with human nature than liberal individualism.
(You could make a spectrum or expand the number of dimensions here; I’m starting with a binary to make the poles obvious.)
It seems like you haven’t expressed a strong belief whether we’re in case 1 or case 2. Some things you’ve said are more compatible with case 1 (e.g. Leverage worrying about mental objects being silly, talking about demons being a psychiatric emergency, it being appropriate for MIRI to stop me from talking about demons and auras, liberalism being basically correct even if there are exceptions). Some are more compatible with case 2 (e.g. Michael causing psychotic breaks, “cults” being real and actually somewhat bad for liberalism to admit the existence of, “charisma” being a big important thing).
I’m left with the impression that your position is to some degree inconsistent (which is pretty normal, propagating beliefs fully is hard) and that you’re assigning low value to investigating the details of this very important variable.
(I myself still have a lot of uncertainty here; I’ve had the impression of subtle mental influence happening from time to time but it’s hard to disambiguate what’s actually happening, and how strong the effect is. I think a lot of what’s going on is people partly-unconsciously trying to synchronize with each other in terms of world-model and behavioral plans, and there existing mental operations one can do that cause others’ synchronization behavior to have weird/unexpected effects.)
I agree I’m being somewhat inconsistent; I’d rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I’m trying to figure out what went on in these cases in more detail and will probably want to ask you a lot of questions by email if you’re open to that.
Yes, I’d be open to answering email questions.
This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.
If it’s reasonable to worry about the .01%, it’s reasonable to ask how the ability varies. There’s some reason, some mechanism. This is worth discussing even if it’s hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.
That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering “body workers” who are extremely good at e.g. causing mental effects by touching people’s back a little; these people could easily be extremal, and Leverage people learned from them. I’ve had sessions with some post-Leverage people where it seemed like really weird mental effects are happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, “oh, I just did an implicit channel thing, maybe you felt that”), I’ve never experienced effects like that (without drugs, and not obviously on drugs either though the comparison is harder) with others including with Michael, Anna, or normal therapists. This could be “placebo” in a way that makes it ultimately not that important but still, if we’re admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people.
Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than “charisma” is still quite important.
One important implication of “cults are possible” is that many normal-seeming people are already too crazy to function as free citizens of a republic.
In other words, from a liberal perspective, someone who can’t make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren’t competent to make their own life decisions. They’re already not free, but in the grip of whatever attractor they found first.
Personally I bite the bullet and admit that I’m not living in a society adequate to support liberal democracy, but instead something more like what Plato’s Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I’d very much like to, someday.
I think there are less extreme positions here. Like “competent adults can make their own decisions, but they can’t if they become too addicted to certain substances.” I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.
competent adults can make their own decisions, but they can’t if they become too addicted to certain substances
I think the principled liberal perspective on this is Bryan Caplan’s: drug addicts have or develop very strong preferences for drugs. The assertion that they can’t make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.
I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.
I don’t think that many people are “fundamentally incapable of being free.” But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association.
The claim that someone is dangerous enough that they should be kept away from “vulnerable people” is a declaration of intent to deny “vulnerable people” freedom of association for their own good. (No one here thinks that a group of people who don’t like Michael Vassar shouldn’t be allowed to get together without him.)
drug addicts have or develop very strong preferences for drugs. The assertion that they can’t make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.
I really don’t think this is an accurate description of what is going on in people’s minds when they are experiencing drug dependencies. I spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went to great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so.
Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it’s a pretty bad model of people’s preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.
This seems like some evidence that the principled liberal position is false—specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good.
Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.
https://en.wikipedia.org/wiki/Olivier_Ameisen
A sidetrack, but a French surgeon found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it also cured compulsive spending, when he hadn’t even realized he had a problem.
He had a hard time raising money for an official experiment, and it came out inconclusive, and he died before the research got any further.
This is more-or-less Aristotle’s defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).
Aristotle seems (though he’s vague on this) to be thinking in terms of fundamental attributes, while I’m thinking in terms of present capacity, which can be reduced by external interventions such as schooling.
Thinking about people I know who’ve met Vassar, the ones who weren’t brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he’s spooky or cultish; to them, he’s obviously just a guy with an interesting perspective.
*As far as I know I didn’t know any such people before 2020; it’s very easy for members of the educated class to mistake our bubble for statistical normality.
This is very interesting to me! I’d like to hear more about how the two groups’ behavior looks different, and also your thoughts on what’s the difference that makes the difference: what are the pieces of “being brought up to go to college” that lead to one class of reactions?
I have talked to Vassar; while he has a lot of “explicit control over conversations”, which could be called charisma, I’d hypothesize that the fallout is actually from his ideas (the charisma/intelligence making him able to credibly argue them).
My hypothesis is the following: I’ve met a lot of rationalists and adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people’s identity (“I’m an EA person, thus I’m a good person doing important work”). Two anecdotes to illustrate this:
- I recently argued against a committed EA person. Eventually, I started feeling almost-bad about arguing (even though we’re both self-declared rationalists!) because I realised that my line of reasoning questioned his entire life. His identity was built deeply on EA; his job was selected to maximize money to give to charity.
- I had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One answer I got: “Only if it works on the alignment problem, everything else is irrelevant to me”.
Vassar very persuasively argues against EA and the work done at MIRI/CFAR (he doesn’t disagree that alignment is a problem, AFAIK). Assuming people are largely defined by these ideas, one can see how that could be threatening to their identity. I’ve read “I’m an evil person” from multiple people relating their “Vassar-psychosis” experience. To me it’s very easy to see how one could get there if the defining part of the identity is “I’m a good person because I work on EA/alignment” and the arguments say “EA/alignment is a scam”.
It also makes Vassar look like a genius (God), because “why wouldn’t the rest of the rationalists see the arguments?”, while it’s really just a group-bias phenomenon, where the social truth of the rationalist group is that obviously EA is good and AI alignment terribly important.
This would probably predict that the people experiencing “Vassar-psychosis” have a stronger-than-average constructed identity based on EA/CFAR/MIRI?
What are your or Vassar’s arguments against EA or AI alignment? This is only tangential to your point, but I’d like to know about it if EA and AI alignment are not important.
The general argument is that EAs are not really doing what they say they do. One example from Vassar would be that when it comes to COVID-19 there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important thing to do and organized effectively to make it happen.
EAs created at EA Global an environment where someone who wrote a good paper warning about the risks of gain-of-function research doesn’t address it directly but only talks about it indirectly, focusing on more meta-issues. Instead of having conflicts with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that’s in less conflict with the establishment. There’s nearly no interest in the EA community in learning from those errors, and people would rather avoid conflicts.
If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information.
AI alignment is important, but just because one “works on AI risk” doesn’t mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to decrease AI risk makes it hard to clearly reason about whether one’s actions actually do. OpenAI would be an organization where people who see themselves as “working on AI alignment” work, and you can look at the recent discussion of whether that work reduces or increases actual risk, which is in open debate.
In a world where human alignment doesn’t work well enough to prevent dangerous gain-of-function experiments from happening, thinking about AI alignment instead of the problem of human alignment, where it’s easier to get feedback, might be the wrong strategic focus.
Did Vassar argue that existing EA organizations weren’t doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?
He argued:
(a) EA orgs aren’t doing what they say they’re doing (e.g. cost-effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it’s hard to get organizations to do what they say they do
(b) Utilitarianism isn’t a form of ethics, it’s still necessary to have principles, as in deontology or two-level consequentialism
(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn’t well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved
(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact
If you want, for example, the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences there suggesting that the process by which their reports are made has epistemic problems. If you want the details, talk to him.
The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes play themselves out.
Vassar’s own actions are about doing altruism more directly, by looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available to them.
You might see his thesis as being that “effective” in EA is about adding a management layer for directing interventions, and that management layer has the problems that the Immoral Mazes sequence describes. According to Vassar, someone who wants to be altruistic shouldn’t delegate to other people their judgments of what’s effective and thus warrants support.
Link? I’m not finding it
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=zqcynfzfKma6QKMK9
I think what you’re pointing to is:
I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)
I’m getting a bit pedantic, but I wouldn’t gloss this as “CEA used legal threats to cover up Leverage related information”. Partly because the original bit is vague, but also because “cover up” implies that the goal is to hide information.
For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.
In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common knowledge post, you find people saying that they were misled by CEA because the announcement didn’t mention that the Pareto Fellowship was largely run by Leverage.
On their mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved in the Pareto Fellowship and just says “The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers.”
That does look to me like hiding information about the cooperation between Leverage and CEA.
I do think that publicly presuming that people who hide information have something to hide is useful. If there’s nothing to hide, I’d love to know what happened back then, or who thinks what happened should stay hidden. At the minimum, I do think that CEA withholding the information that the people who went to their programs spent their time in what now appears to be a cult is something that CEA should be open about on their mistakes page.
Yep, I think CEA has in the past straightforwardly misrepresented (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage’s history with Effective Altruism. I think this was bad, and continues to be bad.
My initial thought on reading this was ‘this seems obviously bad’, and I assumed this was done to shield CEA from reputational risk.
Thinking about it more, I could imagine an epistemic state I’d be much more sympathetic to: ‘We suspect Leverage is a dangerous cult, but we don’t have enough shareable evidence to make that case convincingly to others, or we aren’t sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don’t feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can’t say anything we expect others to find convincing. So we’ll have to just steer clear of the topic for now.’
Still seems better to just not address the subject if you don’t want to give a fully accurate account of it. You don’t have to give talks on the history of EA!
I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something that I would put closer to something more like “Leverage maybe is bad, or maybe isn’t, but in any case it looks bad, and I don’t think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth”.
“Leverage maybe is bad, or maybe isn’t, but in any case it looks bad, and I don’t think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth”
That has the corollary: “We don’t expect EAs to care enough about the truth/being transparent that this is a huge reputational risk for us.”
It does look weird to me that CEA doesn’t include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:
Hi CEA,
On https://www.centreforeffectivealtruism.org/our-mistakes I see “The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable.”
Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]
They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN
(“we’re working on a couple of updates to the mistakes page, including about this”)
Yep, I think the situation is closer to what Jeff describes here, though honestly I don’t actually know, since people tend to get cagey when the topic comes up.
I talked with Geoff, and according to him there’s no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.
Huh, that’s surprising, if by that he means “no contracts between anyone currently at Leverage and anyone at CEA”. I currently still think it’s the case, though I also don’t see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees?
What he said is compatible with ex-CEA people still being bound by the NDAs they signed when they were at CEA. I don’t think anything happened that releases ex-CEA people from those NDAs.
The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if they had an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn’t unilaterally lift the settlement contract.
Public pressure on CEA seems to be necessary to get the information out in the open.
Talking with Vassar feels very intellectually alive, maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn’t get much enjoyment out of insight porn either, so that emotional impact isn’t there.
There’s probably also an element that plenty of people who can normally follow an intellectual conversation can’t keep up in a conversation with Vassar and then come away from a conversation filled with a bunch of different ideas that lack order in their mind. I imagine that sometimes there’s an idea overload that prevents people from critically thinking through some of the ideas.
If you have a person who hasn’t gone to college, they are used to encountering people who make intellectual arguments that go over their head and have a way to deal with that.
From meeting Vassar, I don’t feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff).
This seems mostly right; they’re more likely to think “I don’t understand a lot of these ideas, I’ll have to think about this for a while” or “I don’t understand a lot of these ideas, he must be pretty smart and that’s kinda cool” than to feel invalidated by this and try to submit to him in lieu of understanding.
The people I know who weren’t brought up to go to college have more experience navigating concrete threats and dangers, which can’t be avoided through conformity, since the system isn’t set up to take care of people like them. They have to know what’s going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.
In general this means that they’re much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.
This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.
This is interesting to me because I was brought up to go to college, but I didn’t take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god.
It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.
I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he’s “causing psychotic breaks” and “jailbreaking people” through conversation, “that listening too much to Vassar [causes psychosis], predictably”) isn’t obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.
Yes, I agree with you that all of this is very awkward.
I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.
But we have to admit at least small violations of it even to get the concept of “cult”. Not just the sort of weak cults we’re discussing here, but even the really strong cults like Heaven’s Gate or Jamestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven’s Gate is bad for them, and leave. When we use the word “cult”, we’re implicitly agreeing that this doesn’t always work, and we’re bringing in creepier and less comprehensible ideas like “charisma” and “brainwashing” and “cognitive dissonance”.
(and the same thing with the concept of “emotionally abusive relationship”)
I don’t want to call the Vassarites a cult because I’m sure someone will confront me with a Cult Checklist that they don’t meet, but I think that it’s not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it’s weird that you can get that far without robes and chanting or anything, but I never claimed to really understand exactly how cults work, plus I’m sure the drugs helped.
I think believing cults are possible is different in degree if not in kind from Leverage “doing seances...to call on demonic energies and use their power to affect the practitioners’ social standing”. I’m claiming, though I can’t prove it, that what I’m saying is more towards the “believing cults are possible” side.
I’m actually very worried about this! I hate admitting cults are possible! If you admit cults are possible, you have to acknowledge that the basic liberal model has gaps, and then you get things like if an evangelical deconverts to atheism, the other evangelicals can say “Oh, he’s in a cult, we need to kidnap and deprogram him since his best self wouldn’t agree with the deconversion.” I want to be extremely careful in when we do things like that, which is why I’m not actually “calling for isolating Michael Vassar from his friends”. I think in the Outside View we should almost never do this!
But you were the one to mention this cluster of psychotic breaks, and I am trying to provide what I think is a more accurate perspective on them. Maybe in the future we learn that this was because of some weird neuroinflammatory virus that got passed around at a Vassarite meeting and we laugh that we were ever dumb enough to think a person/group could transmit psychotic breaks. But until then, I think the data point that all of this was associated with Vassar and the Vassarites is one we shouldn’t just ignore.
It seems to me like in the case of Leverage, them working 75 hours per week reduced the time the could have used to use Reason to conclude that they are in a system that’s bad for them.
That’s very different from someone having a few conversation with Vassar and then adopting a new belief and spending a lot of the time reasoning about that alone and the belief being stable without being embedded into a strong enviroment that makes independent thought hard because it keeps people busy.
A cult in it’s nature is a social institution and not just a meme that someone can pass around via having a few conversations.
Perhaps the proper word here might be “manipulation” or “bad influence”.
I think “mind virus” is fair. Vassar spoke a lot about how the world as it is can’t be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny.
The thing with “bad influence” is that it’s a pretty value-laden thing. In a religious town the biology teacher who tells the children about evolution and explains how it makes sense that our history goes back a lot further then a few thousands years is reasonably described as bad influence by the parents.
The religion teacher gets the children to doubt the religious authorities. Those children then can also be a bad influence on others by also getting them to doubt authorities. In a similar war Vassar gets people to question other authorities and social conventions and how those ideas can then be passed on.
Vassar speaks about things like Moral Mazes. Memes like that make people distrust institutions. There are the kind of bad influence that can get people to quit their job.
Talking about the biology teacher like they are intend to start an evolution cult feels a bit misleading.
It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.
Let’s consider a disjunction: 1: There isn’t a big effect here, 2: There is a big effect here.
In case 1:
It might make sense to discourage people from talking too much about “charisma”, “auras”, “mental objects”, etc, since they’re pretty fake, really not the primary factors to think about when modeling society.
The main problem with the relevant discussions at Leverage is that they’re making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
The case made against Michael, that he can “cause psychotic breaks” by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it’s basically a witch hunt. We should have a much more moderated, holistic picture where there are multiple people in a social environment affecting a person, and the people closer to them generally have more influence, such that causing psychotic breaks 2 hops out is implausible, and causing psychotic breaks with only occasional conversation (and very little conversation close to the actual psychotic episode) is also quite unlikely.
There isn’t a significant falsification of liberal individualism.
In case 2:
Since there’s a big effect, it makes sense to spend a lot of energy speculating on “charisma”, “auras”, “mental objects”, and similar hypotheses. “Charisma” has fewer details than “auras” which has fewer details than “mental objects”; all of them are hypotheses someone could come up with in the course of doing pre-paradigmatic study of the phenomenon, knowing that while these initial hypotheses will make mis-predictions sometimes, they’re (in expectation) moving in the direction of clarifying the phenomenon. We shouldn’t just say “charisma” and leave it at that, it’s so important that we need more details/gears.
Leverage’s claims about weird mind powers are to some degree plausible, there’s a big phenomenon here even if their models are wrong/silly in some places. The weird social dynamics are a result of an actual attempt to learn about and manage this extremely important phenomenon.
The claim that Michael can cause psychotic breaks by talking with people is plausible. The claim that he can cause psychotic breaks 2 hops out might be plausible depending on the details (this is pretty similar to a “mental objects” claim).
There is a significant falsification of liberal individualism. Upon learning about the details of how mental influence works, you could easily conclude that some specific form of collectivism is much more compatible with human nature than liberal individualism.
(You could make a spectrum or expand the number of dimensions here, I’m starting with a binary here to make the poles obvious)
It seems like you haven’t expressed a strong belief whether we’re in case 1 or case 2. Some things you’ve said are more compatible with case 1 (e.g. Leverage worrying about mental objects being silly, talking about demons being a psychiatric emergency, it being appropriate for MIRI to stop me from talking about demons and auras, liberalism being basically correct even if there are exceptions). Some are more compatible with case 2 (e.g. Michael causing psychotic breaks, “cults” being real and actually somewhat bad for liberalism to admit the existence of, “charisma” being a big important thing).
I’m left with the impression that your position is to some degree inconsistent (which is pretty normal, propagating beliefs fully is hard) and that you’re assigning low value to investigating the details of this very important variable.
(I myself still have a lot of uncertainty here; I’ve had the impression of subtle mental influence happening from time to time but it’s hard to disambiguate what’s actually happening, and how strong the effect is. I think a lot of what’s going on is people partly-unconsciously trying to synchronize with each other in terms of world-model and behavioral plans, and there existing mental operations one can do that cause others’ synchronization behavior to have weird/unexpected effects.)
I agree I’m being somewhat inconsistent, I’d rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I’m trying to figure out what went on in these cases in more details and will probably want to ask you a lot of questions by email if you’re open to that.
Yes, I’d be open to answering email questions.
This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.
If it’s reasonable to worry about the .01%, it’s reasonable to ask how the ability varies. There’s some reason, some mechanism. This is worth discussing even if it’s hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.
That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering “body workers” who are extremely good at e.g. causing mental effects by touching people’s back a little; these people could easily be extremal, and Leverage people learned from them. I’ve had sessions with some post-Leverage people where it seemed like really weird mental effects are happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, “oh, I just did an implicit channel thing, maybe you felt that”), I’ve never experienced effects like that (without drugs, and not obviously on drugs either though the comparison is harder) with others including with Michael, Anna, or normal therapists. This could be “placebo” in a way that makes it ultimately not that important but still, if we’re admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people.
Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than “charisma” is still quite important.
One important implication of “cults are possible” is that many normal-seeming people are already too crazy to function as free citizens of a republic.
In other words, from a liberal perspective, someone who can’t make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren’t competent to make their own life decisions. They’re already not free, but in the grip of whatever attractor they found first.
Personally I bite the bullet and admit that I’m not living in a society adequate to support liberal democracy, but instead something more like what Plato’s Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I’d very much like to, someday.
I think there are less extreme positions here. Like “competent adults can make their own decisions, but they can’t if they become too addicted to certain substances.” I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.
I think the principled liberal perspective on this is Bryan Caplan’s: drug addicts have or develop very strong preferences for drugs. The assertion that they can’t make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.
I don’t think that many people are “fundamentally incapable of being free.” But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association.
The claim that someone is dangerous enough that they should be kept away from “vulnerable people” is a declaration of intent to deny “vulnerable people” freedom of association for their own good. (No one here thinks that a group of people who don’t like Michael Vassar shouldn’t be allowed to get together without him.)
I really don’t think this is an accurate description of what is going on in people’s mind when they are experiencing drug dependencies. I’ve spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went through great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so.
Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it’s a pretty bad model of people’s preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.
This seems like some evidence that the principled liberal position is false—specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good.
Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.
https://en.wikipedia.org/wiki/Olivier_Ameisen
A sidetrack, but a French surgeon found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it cured compulsive spending when he didn’t even realize he had a problem.
He had a hard time raising money for an official experiment, and it came out inconclusive, and he died before the research got any further.
This is more-or-less Aristotle’s defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).
Aristotle seems (though he’s vague on this) to be thinking in terms of fundamental attributes, while I’m thinking in terms of present capacity, which can be reduced by external interventions such as schooling.
Thinking about people I know who’ve met Vassar, the ones who weren’t brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he’s spooky or cultish; to them, he’s obviously just a guy with an interesting perspective.
*As far as I know I didn’t know any such people before 2020; it’s very easy for members of the educated class to mistake our bubble for statistical normality.
This is very interesting to me! I’d like to hear more about how the two group’s behavior looks diff, and also your thoughts on what’s the difference that makes the difference, what are the pieces of “being brought up to go to college” that lead to one class of reactions?
I have talked to Vassar, while he has a lot of “explicit control over conversations” which could be called charisma, I’d hypothesize that the fallout is actually from his ideas. (The charisma/intelligence making him able to credibly argue those)
My hypothesis is the following: I’ve met a lot of rationalists + adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people’s identity (“I’m an EA person, thus I’m a good person doing important work”). Two anecdotes to illustrate this:
- I’d recently argued against a committed EA person. Eventually, I started feeling almost-bad about arguing (even though we’re both self-declared rationalists!) because I’d realised that my line of reasoning questioned his entire life. His identity was built deeply on EA, his job was selected to maximize money to give to charity.
- I’d had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One I got: “Only if it works on the alignment problem, everything else is irrelevant to me”.
Vassar very persuasively argues against EA and work done at MIRI/CFAR (He doesn’t disagree alignment is a problem AFAIK). Assuming people largely defined by these ideas, one can see how that could be threatening to their identity. I’ve read “I’m an evil person” from multiple people relating their “Vassar-psychosis” experience. To me it’s very easy to see how one could get there if the defining part of the identity is “I’m a good person because I work on EA/Alignment” + “EA/Aligment is a scam” arguments.
It also makes Vassar look like a genius (God), because “why wouldn’t the rest of the rationalists see the arguments”, while it’s really just a group-bias phenomenon, where the social truth of the rationalist group is that obviously EA is good and AI alignment terribly important.
This would probably predict, that the people experiencing “Vassar-psychosis” would’ve a stronger-than-average constructed identity based on EA/CFAR/MIRI?
What are your or Vassar’s arguments against EA or AI alignment? This is only tangential to your point, but I’d like to know about it if EA and AI alignment are not important.
The general argument is that EA’s are not really doing what they say they do. One example from Vassar would be that when it comes to COVID-19 for example there seem to be relatively little effective work by EA’s. In contrast Vassar considered giving prisoners access to personal equipment the most important and organized effectively for that to happen.
EA’s created in EA Global an enviroment where someone who wrote a good paper warning about the risks of gain-of-function research doesn’t address that directly but only talks indirectly about it to focus on more meta-issues. Instead of having conflicts with people doing gain-of-function research the EA community mostly ignored it’s problems and funded work that’s in less conflict with the establishment. There’s nearly no interest in learning from those errors in the EA community and people rather avoid conflicts.
If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information.
AI alignment is important but just because one “works on AI risk” doesn’t mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to decrease AI risk makes it hard to clearly reason about whether one’s actions actually do. OpenAI would be an organization where people who see themselves as “working on AI alignment” work and you can look at the recent discussion that whether or not that work reduces or increases actual risk is in open debate.
In a world where human alignment doesn’t work to prevent dangerous gain of function experiments from happening thinking about AI alignment instead of the problem of human alignment where it’s easier to get feedback might be the wrong strategic focus.
Did Vassar argue that existing EA organizations weren’t doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?
He argued
(a) EA orgs aren’t doing what they say they’re doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it’s hard to get organizations to do what they say they do
(b) Utilitarianism isn’t a form of ethics, it’s still necessary to have principles, as in deontology or two-level consequentialism
(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn’t well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved
(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact
If you for example want the critcism on GiveWell, Ben Hoffman was employed at GiveWell and made experiences that suggest that the process based on which their reports are made has epistemic problems. If you want the details talk to him.
The general model would be that between actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporatist people who behave in the dynamics that the immoral maze sequence describes play themselves out.
Vassar’s action themselves are about doing altruistic actions more directly by looking for who are most powerless who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available for them.
You might see his thesis is that “effective” in EA is about adding a management layer for directing interventions and that management layer has the problems that the immoral maze sequence describes. According to Vassar someone who wants to be altrustic shouldn’t delegate his judgements of what’s effective and thus warrents support to other people.
Link? I’m not finding it
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=zqcynfzfKma6QKMK9
I think what you’re pointing to is:
I’m getting a bit pedantic, but I wouldn’t gloss this as “CEA used legal threats to cover up Leverage related information”. Partly because the original bit is vague, but also because “cover up” implies that the goal is to hide information.
For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.
In that timeframe CEA and Leverage were running the Pareto Fellowship together. If you read the common knowledge post, you find people saying they were misled by CEA because the announcement didn’t mention that the Pareto Fellowship was largely run by Leverage.
On their mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved and instead says “The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers.”
That does look to me like hiding information about the cooperation between Leverage and CEA.
I do think that publicly presuming that people who hide information have something to hide is useful. If there’s nothing to hide, I’d love to know what happened back then, or who thinks what happened should stay hidden. At a minimum, I think that CEA withholding the information that the people who went to their programs spent their time in what now appears to be a cult is something CEA should be open about on their mistakes page.
Yep, I think CEA has in the past straightforwardly misrepresented things (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage’s history with Effective Altruism. I think this was bad, and continues to be bad.
My initial thought on reading this was ‘this seems obviously bad’, and I assumed this was done to shield CEA from reputational risk.
Thinking about it more, I could imagine an epistemic state I’d be much more sympathetic to: ‘We suspect Leverage is a dangerous cult, but we don’t have enough shareable evidence to make that case convincingly to others, or we aren’t sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don’t feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can’t say anything we expect others to find convincing. So we’ll have to just steer clear of the topic for now.’
Still seems better to just not address the subject if you don’t want to give a fully accurate account of it. You don’t have to give talks on the history of EA!
I think the epistemic state of CEA was some mixture of something pretty close to what you list here and something more like “Leverage maybe is bad, or maybe isn’t, but in any case it looks bad, and I don’t think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth”.
That has the corollary: “We don’t expect EAs to care enough about the truth/being transparent that this is a huge reputational risk for us.”
It does look weird to me that CEA doesn’t include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:
They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN
(“we’re working on a couple of updates to the mistakes page, including about this”)
Yep, I think the situation is closer to what Jeff describes here, though I honestly don’t actually know, since people tend to get cagey when the topic comes up.
I talked with Geoff, and according to him there’s no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.
Huh, that’s surprising, if by that he means “no contracts between anyone currently at Leverage and anyone at CEA”. I currently still think such contracts exist, though I also don’t see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees?
What he said is compatible with ex-CEA people still being bound by the NDAs they signed when they were at CEA. I don’t think anything happened that releases ex-CEA people from those NDAs.
The important thing is that CEA is responsible for those NDAs and is free to lift them unilaterally if it has an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn’t unilaterally lift the settlement contract.
Public pressure on CEA seems to be necessary to get the information out in the open.
Talking with Vassar feels very intellectually alive; maybe something like a high density of insight porn. I imagine that the people Ben talks about wouldn’t get much enjoyment out of insight porn either, so that emotional impact isn’t there for them.
There’s probably also an element of this: plenty of people who can normally follow an intellectual conversation can’t keep up with Vassar, and after a conversation are left with a bunch of different ideas that lack order in their minds. I imagine that sometimes there’s an idea overload that prevents people from thinking critically through some of the ideas.
Someone who hasn’t gone to college is used to encountering people who make intellectual arguments that go over their head, and already has a way to deal with that.
From meeting Vassar, I don’t feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff).
This seems mostly right; they’re more likely to think “I don’t understand a lot of these ideas, I’ll have to think about this for a while” or “I don’t understand a lot of these ideas, he must be pretty smart and that’s kinda cool” than to feel invalidated by this and try to submit to him in lieu of understanding.
The people I know who weren’t brought up to go to college have more experience navigating concrete threats and dangers, which can’t be avoided through conformity, since the system isn’t set up to take care of people like them. They have to know what’s going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.
In general this means that they’re much more comfortable than high-class people are with the kind of confrontation Vassar engages in.
This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.
This is interesting to me because I was brought up to go to college, but I didn’t take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god.
It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.