I want to add some context I think is important to this.
Jessica was (I don’t know if she still is) part of a group centered around a person named Vassar, informally dubbed “the Vassarites”. Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to “jailbreak” yourself from it (I’m using a term I found on Ziz’s discussion of her conversations with Vassar; I don’t know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.
Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don’t think he thinks they’re worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it’s especially galling that they’re just as bad). Since then, he’s tried to “jailbreak” a lot of people associated with MIRI and CFAR—again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success (“these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird”). Occasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.
(I am a psychiatrist and obviously biased here)
Jessica talks about a cluster of psychoses from 2017–2019 which she blames on MIRI/CFAR. She admits that not all the people involved worked for MIRI or CFAR, but kind of equivocates around this and says they were “in the social circle” in some way. The actual connection is that most (maybe all?) of these people were involved with the Vassarites or the Zizians (the latter being IMO a Vassarite splinter group, though I think both groups would deny this characterization). The main connection to MIRI/CFAR is that the Vassarites recruited from the MIRI/CFAR social network.
I don’t have hard evidence of all these points, but I think Jessica’s text kind of obliquely confirms some of them. She writes:
“Psychosis” doesn’t have to be a bad thing, even if it usually is in our society; it can be an exploration of perceptions and possibilities not before imagined, in a supportive environment that helps the subject to navigate reality in a new way; some of R.D. Laing’s work is relevant here, describing psychotic mental states as a result of ontological insecurity following from an internal division of the self at a previous time. Despite the witch hunts and so on, the Leverage environment seems more supportive than what I had access to. The people at Leverage I talk to, who have had some of these unusual experiences, often have a highly exploratory attitude to the subtle mental realm, having gained access to a new cognitive domain through the experience, even if it was traumatizing.
RD Laing was a 1960s pseudoscientist who claimed that schizophrenia is how “the light [begins] to break through the cracks in our all-too-closed minds”. He opposed schizophrenics taking medication, and advocated treatments like “rebirthing therapy”, where people role-play fetuses going through the birth canal, for which he was stripped of his medical license. The Vassarites like him because he is on their side in the whole “actually psychosis is just people being enlightened as to the true nature of society” thing. I think Laing was wrong, that psychosis is actually bad, and that the “actually psychosis is good sometimes” mindset is closely related to the Vassarites causing all of these cases of psychosis.
Unless there were psychiatric institutionalizations or jail time resulting from the Leverage psychosis, I infer that Leverage overall handled their metaphysical weirdness better than the MIRI/CFAR adjacent community. While in Leverage the possibility of subtle psychological influence between people was discussed relatively openly, around MIRI/CFAR it was discussed covertly, with people being told they were crazy for believing it might be possible. (I noted at the time that there might be a sense in which different people have “auras” in a way that is not less inherently rigorous than the way in which different people have “charisma”, and I feared this type of comment would cause people to say I was crazy.) As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.
Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don’t want to assert that I am 100% sure this can never be true, I think it’s true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.
On the two cases of suicide, Jessica writes:
Both these cases are associated with a subgroup splitting off of the CFAR-centric rationality community due to its perceived corruption, centered around Ziz. (I also thought CFAR was pretty corrupt at the time, and I also attempted to split off another group when attempts at communication with CFAR failed; I don’t think this judgment was in error, though many of the following actions were; the splinter group seems to have selected for high scrupulosity and not attenuated its mental impact.)
Ziz tried to create an anti-CFAR/MIRI splinter group whose members had mental breakdowns. Jessica also tried to create an anti-CFAR/MIRI splinter group and had a mental breakdown. This isn’t a coincidence—Vassar tried his jailbreaking thing on both of them, and it tended to reliably produce people who started crusades against MIRI/CFAR, and who had mental breakdowns. Here’s an excerpt from Ziz’s blog on her experience (edited heavily for length, and slightly to protect the innocent):
When I first met Vassar, it was a random encounter in an experimental group call organized by some small-brand rationalist. He talked for about an hour, and automatically became the center of conversation; I typed notes as fast as I could, thinking, “if this stuff is true it changes everything; it’s the [crux] of my life.” (It was true, but I did not realize it immediately.) Randomly, another person found the link, came in and said, “hi”. [Vassar] said “hi”, she said “hi” again, apparently for humor. [Vassar] said something terse I forget, “well if this is what …”, apparently giving up on the venue, and disconnected without further comment. One by one, the other ~10 people besides her, including me, disconnected disappointedly, wordlessly or just about, right after. A wizard was gracing us with his wisdom and she fucked it up. And in my probably-representative case that was just about the only way I could communicate how frustrated I was at her for that.
[Vassar explained how] across society, the forces of gaslighting were attacking people’s basic ability to think and to use justice as a Schelling point, until only the built-in Schelling points of gender and race remained. Vassar listed fronts in the war on gaslighting, disputes in the community, and included [local community member ZD] [...] ZD said Vassar broke them out of a mental hospital. I didn’t ask them how. But I considered that both badass and heroic. From what I hear, ZD was, probably as with most, imprisoned for no good reason, in some despicable act of, “get that unsightly person not playing along with the [heavily DRM’d] game we’ve called sanity out of my free world”.
I heard [local community member AM] was Vassar’s former “apprentice”. And I had started picking up jailbroken wisdom from them secondhand without knowing where it was from. But Vassar did it better. After Rationalist Fleet, I concluded I was probably worth Vassar’s time to talk to a bit, and I emailed him, carefully briefly stating my qualifications, in terms of ability to take ideas seriously and learn from him, so that he could get maximally dense VOI on whether to talk to me. A long conversation ensued. And I got a lot from it. [...]
Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve. And didn’t detransition. This all created an awful tension in me. The rationality community was kind of compromised as a rallying point for truthseeking. This was desperately bad for the world. [Vassar] was at the center of, largely the creator of a “no actually for real” rallying point for the jailbroken reality-not-social-reality version of this.
Ziz is describing the same cluster of psychoses Jessica is (including Jessica’s own), but I think doing so more accurately, by describing how it was a Vassar-related phenomenon. I would add Ziz herself to the list of trans women who got negative mental effects from Vassar, although I think (not sure) Ziz would not endorse my description of her as having these.
What was the community’s response to this? I have heard rumors that Vassar was fired from MIRI a long time ago for doing some very early version of this, although I don’t know if it’s true. He was banned from REACH (and implicitly rationalist social events) for somewhat unrelated reasons. I banned him from SSC meetups for a combination of reasons including these. For reasons I don’t fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything’s kind of been frozen in place since then.
I want to clarify that I don’t dislike Vassar; he’s actually been extremely nice to me, I continue to be in cordial and productive communication with him, and his overall influence on my life personally has been positive. He’s also been surprisingly gracious about the fact that I go around accusing him of causing a bunch of cases of psychosis. I don’t think he does the psychosis thing on purpose. I think he is honest in his belief that the world is corrupt and traumatizing (which, at the margin, shades into values of “the world is corrupt and traumatizing” which everyone agrees are true), and I believe he is honest in his belief that he needs to figure out ways to help people do better. There are many smart people who work with him and support him who have not gone psychotic at all. I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people. My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.
EDIT/UPDATE: I got a chance to talk to Vassar, who disagrees with my assessment above. We’re still trying to figure out the details, but so far, we agree that there was a cluster of related psychoses around 2017, all of which were in the same broad part of the rationalist social graph. Features of that part were—it contained a lot of trans women, a lot of math-y people, and some people who had been influenced by Vassar, although Vassar himself may not have been a central member. We are still trying to trace the exact chain of who had problems first and how those problems spread. I still suspect that Vassar unwittingly came up with some ideas that other people then spread through the graph. Vassar still denies this and is going to try to explain a more complete story to me when I have more time.
Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC
Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I’ll try to explain some context for the record.
In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme “trans women are [psychologically] men”. I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which in retrospect I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other’s views. I believe (less certain) I also broached the topic with Michael and/or Anna at some point, which probably went like a brief mutual acknowledgement of this hidden fact before continuing on to topics that were more important.
I don’t think anyone mentioned above was being dishonest about what they thought or was acting from a desire to hurt trans people. Yet the above exchanges did in retrospect cause me emotional pain and stress, and contributed to internalizing sexism and transphobia. I definitely wouldn’t describe this as a main causal factor in my psychosis (that was the very casual drug use that even Michael chided me for). I can’t think of a good policy that would have been helpful to me in the above interactions. Maybe emphasizing bucket-errors in this context more, or spreading caution about generalizing from abstract models to yourself, but I think I would have been too rash to listen.
I wouldn’t say I completely moved past this until years following the events. I think the following things were helpful for that (in no particular order): the intersex brains model and associated brain imaging studies, everyday acceptance while living a normal life, not allowing myself concerns larger than renovations or retirement savings, getting to experience some parts of female socialization and mother-daughter bonding, full support from friends and family in cases my gender has come into question, and the acknowledgement of a medical system that still has some gate-keeping aspects (note: I don’t think this positive effect of a gate-keeping system at all justifies the negative of denying anyone morphological freedom).
Thinking back to these events, engaging with the LessWrong community, and even publicly engaging under my real name bring back fear and feelings of trauma. I’m not saying this to increase a sense of having been wronged but as an apology for this not being as long as it should be, or as well-written, and for the lateness/absence of any replies/followups.
I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he’s “causing psychotic breaks” and “jailbreaking people” through conversation, “that listening too much to Vassar [causes psychosis], predictably”) isn’t obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.
Yes, I agree with you that all of this is very awkward.
I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.
But we have to admit at least small violations of it even to get the concept of “cult”. Not just the sort of weak cults we’re discussing here, but even the really strong cults like Heaven’s Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven’s Gate is bad for them, and leave. When we use the word “cult”, we’re implicitly agreeing that this doesn’t always work, and we’re bringing in creepier and less comprehensible ideas like “charisma” and “brainwashing” and “cognitive dissonance”.
(and the same thing with the concept of “emotionally abusive relationship”)
I don’t want to call the Vassarites a cult because I’m sure someone will confront me with a Cult Checklist that they don’t meet, but I think that it’s not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it’s weird that you can get that far without robes and chanting or anything, but I never claimed to really understand exactly how cults work, plus I’m sure the drugs helped.
I think believing cults are possible is different in degree if not in kind from Leverage “doing seances...to call on demonic energies and use their power to affect the practitioners’ social standing”. I’m claiming, though I can’t prove it, that what I’m saying is more towards the “believing cults are possible” side.
I’m actually very worried about this! I hate admitting cults are possible! If you admit cults are possible, you have to acknowledge that the basic liberal model has gaps, and then you get things like if an evangelical deconverts to atheism, the other evangelicals can say “Oh, he’s in a cult, we need to kidnap and deprogram him since his best self wouldn’t agree with the deconversion.” I want to be extremely careful in when we do things like that, which is why I’m not actually “calling for isolating Michael Vassar from his friends”. I think in the Outside View we should almost never do this!
But you were the one to mention this cluster of psychotic breaks, and I am trying to provide what I think is a more accurate perspective on them. Maybe in the future we learn that this was because of some weird neuroinflammatory virus that got passed around at a Vassarite meeting and we laugh that we were ever dumb enough to think a person/group could transmit psychotic breaks. But until then, I think the data point that all of this was associated with Vassar and the Vassarites is one we shouldn’t just ignore.
It seems to me like in the case of Leverage, working 75 hours per week reduced the time they could have used Reason to conclude that they were in a system that was bad for them.
That’s very different from someone having a few conversations with Vassar, then adopting a new belief, spending a lot of time reasoning about it alone, and the belief being stable without being embedded in a strong environment that makes independent thought hard by keeping people busy.
A cult is in its nature a social institution, not just a meme that someone can pass around by having a few conversations.
I think “mind virus” is fair. Vassar spoke a lot about how the world as it is can’t be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny.
The thing with “bad influence” is that it’s a pretty value-laden notion. In a religious town, the biology teacher who tells the children about evolution and explains how it makes sense that our history goes back a lot further than a few thousand years is reasonably described as a bad influence by the parents.
The biology teacher gets the children to doubt the religious authorities. Those children can then be a bad influence on others by getting them to doubt authorities too. In a similar way, Vassar gets people to question other authorities and social conventions, and those ideas can then be passed on.
Vassar speaks about things like Moral Mazes. Memes like that make people distrust institutions. They are the kind of bad influence that can get people to quit their jobs.
Talking about the biology teacher as if they intend to start an evolution cult feels a bit misleading.
It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.
Let’s consider a disjunction: 1: There isn’t a big effect here, 2: There is a big effect here.
In case 1:
It might make sense to discourage people from talking too much about “charisma”, “auras”, “mental objects”, etc, since they’re pretty fake, really not the primary factors to think about when modeling society.
The main problem with the relevant discussions at Leverage is that they’re making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
The case made against Michael, that he can “cause psychotic breaks” by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it’s basically a witch hunt. We should have a much more moderated, holistic picture where there are multiple people in a social environment affecting a person, and the people closer to them generally have more influence, such that causing psychotic breaks 2 hops out is implausible, and causing psychotic breaks with only occasional conversation (and very little conversation close to the actual psychotic episode) is also quite unlikely.
There isn’t a significant falsification of liberal individualism.
In case 2:
Since there’s a big effect, it makes sense to spend a lot of energy speculating on “charisma”, “auras”, “mental objects”, and similar hypotheses. “Charisma” has fewer details than “auras” which has fewer details than “mental objects”; all of them are hypotheses someone could come up with in the course of doing pre-paradigmatic study of the phenomenon, knowing that while these initial hypotheses will make mis-predictions sometimes, they’re (in expectation) moving in the direction of clarifying the phenomenon. We shouldn’t just say “charisma” and leave it at that, it’s so important that we need more details/gears.
Leverage’s claims about weird mind powers are to some degree plausible, there’s a big phenomenon here even if their models are wrong/silly in some places. The weird social dynamics are a result of an actual attempt to learn about and manage this extremely important phenomenon.
The claim that Michael can cause psychotic breaks by talking with people is plausible. The claim that he can cause psychotic breaks 2 hops out might be plausible depending on the details (this is pretty similar to a “mental objects” claim).
There is a significant falsification of liberal individualism. Upon learning about the details of how mental influence works, you could easily conclude that some specific form of collectivism is much more compatible with human nature than liberal individualism.
(You could make a spectrum or expand the number of dimensions here, I’m starting with a binary here to make the poles obvious)
It seems like you haven’t expressed a strong belief whether we’re in case 1 or case 2. Some things you’ve said are more compatible with case 1 (e.g. Leverage worrying about mental objects being silly, talking about demons being a psychiatric emergency, it being appropriate for MIRI to stop me from talking about demons and auras, liberalism being basically correct even if there are exceptions). Some are more compatible with case 2 (e.g. Michael causing psychotic breaks, “cults” being real and actually somewhat bad for liberalism to admit the existence of, “charisma” being a big important thing).
I’m left with the impression that your position is to some degree inconsistent (which is pretty normal, propagating beliefs fully is hard) and that you’re assigning low value to investigating the details of this very important variable.
(I myself still have a lot of uncertainty here; I’ve had the impression of subtle mental influence happening from time to time but it’s hard to disambiguate what’s actually happening, and how strong the effect is. I think a lot of what’s going on is people partly-unconsciously trying to synchronize with each other in terms of world-model and behavioral plans, and there existing mental operations one can do that cause others’ synchronization behavior to have weird/unexpected effects.)
I agree I’m being somewhat inconsistent; I’d rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I’m trying to figure out what went on in these cases in more detail and will probably want to ask you a lot of questions by email if you’re open to that.
This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.
If it’s reasonable to worry about the .01%, it’s reasonable to ask how the ability varies. There’s some reason, some mechanism. This is worth discussing even if it’s hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.
That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering “body workers” who are extremely good at e.g. causing mental effects by touching people’s back a little; these people could easily be extremal, and Leverage people learned from them. I’ve had sessions with some post-Leverage people where it seemed like really weird mental effects were happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, “oh, I just did an implicit channel thing, maybe you felt that”). I’ve never experienced effects like that (without drugs, and not obviously on drugs either, though the comparison is harder) with others, including Michael, Anna, or normal therapists. This could be “placebo” in a way that makes it ultimately not that important, but still, if we’re admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people.
Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than “charisma” is still quite important.
One important implication of “cults are possible” is that many normal-seeming people are already too crazy to function as free citizens of a republic.
In other words, from a liberal perspective, someone who can’t make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren’t competent to make their own life decisions. They’re already not free, but in the grip of whatever attractor they found first.
Personally I bite the bullet and admit that I’m not living in a society adequate to support liberal democracy, but instead something more like what Plato’s Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I’d very much like to, someday.
I think there are less extreme positions here. Like “competent adults can make their own decisions, but they can’t if they become too addicted to certain substances.” I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.
competent adults can make their own decisions, but they can’t if they become too addicted to certain substances
I think the principled liberal perspective on this is Bryan Caplan’s: drug addicts have or develop very strong preferences for drugs. The assertion that they can’t make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.
I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.
I don’t think that many people are “fundamentally incapable of being free.” But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association.
The claim that someone is dangerous enough that they should be kept away from “vulnerable people” is a declaration of intent to deny “vulnerable people” freedom of association for their own good. (No one here thinks that a group of people who don’t like Michael Vassar shouldn’t be allowed to get together without him.)
drug addicts have or develop very strong preferences for drugs. The assertion that they can’t make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.
I really don’t think this is an accurate description of what is going on in people’s mind when they are experiencing drug dependencies. I’ve spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went through great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so.
Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it’s a pretty bad model of people’s preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.
This seems like some evidence that the principled liberal position is false—specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good.
Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.
A sidetrack, but a French surgeon found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it cured compulsive spending when he didn’t even realize he had a problem.
He had a hard time raising money for an official experiment, and it came out inconclusive, and he died before the research got any further.
This is more-or-less Aristotle’s defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).
Aristotle seems (though he’s vague on this) to be thinking in terms of fundamental attributes, while I’m thinking in terms of present capacity, which can be reduced by external interventions such as schooling.
Thinking about people I know who’ve met Vassar, the ones who weren’t brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he’s spooky or cultish; to them, he’s obviously just a guy with an interesting perspective.
Thinking about people I know who’ve met Vassar, the ones who weren’t brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he’s spooky or cultish; to them, he’s obviously just a guy with an interesting perspective.
This is very interesting to me! I’d like to hear more about how the two groups’ behavior looks different, and also your thoughts on what’s the difference that makes the difference: what are the pieces of “being brought up to go to college” that lead to one class of reactions?
I have talked to Vassar. While he has a lot of “explicit control over conversations,” which could be called charisma, I’d hypothesize that the fallout is actually from his ideas, with the charisma/intelligence making him able to credibly argue for them.
My hypothesis is the following: I’ve met a lot of rationalists and adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of many of these people’s identity (“I’m an EA person, thus I’m a good person doing important work”). Two anecdotes to illustrate this:

- I’d recently argued against a committed EA person. Eventually, I started feeling almost bad about arguing (even though we’re both self-declared rationalists!) because I’d realised that my line of reasoning questioned his entire life. His identity was built deeply on EA; his job was selected to maximize money to give to charity.
- I’d had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One response I got: “Only if it works on the alignment problem; everything else is irrelevant to me.”
Vassar very persuasively argues against EA and the work done at MIRI/CFAR (he doesn’t disagree that alignment is a problem, AFAIK). Assuming people are largely defined by these ideas, one can see how that could be threatening to their identity. I’ve read “I’m an evil person” from multiple people relating their “Vassar-psychosis” experience. To me it’s very easy to see how one could get there if the defining part of one’s identity is “I’m a good person because I work on EA/alignment” plus “EA/alignment is a scam” arguments. It also makes Vassar look like a genius (God), because “why wouldn’t the rest of the rationalists see the arguments?”, while it’s really just a group-bias phenomenon, where the social truth of the rationalist group is that obviously EA is good and AI alignment terribly important.
This would probably predict that the people experiencing “Vassar-psychosis” would have a stronger-than-average constructed identity based on EA/CFAR/MIRI?
What are your or Vassar’s arguments against EA or AI alignment? This is only tangential to your point, but I’d like to know about it if EA and AI alignment are not important.
The general argument is that EAs are not really doing what they say they do. One example from Vassar: when it comes to COVID-19, there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important intervention and organized effectively to make that happen.
At EA Global, EAs created an environment where someone who wrote a good paper warning about the risks of gain-of-function research didn’t address that directly, but only talked about it indirectly and focused on more meta-issues. Instead of having conflicts with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that’s in less conflict with the establishment. There’s nearly no interest in the EA community in learning from those errors, and people would rather avoid conflicts.
If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information.
AI alignment is important, but just because one “works on AI risk” doesn’t mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to decrease AI risk makes it hard to clearly reason about whether one’s actions actually do. OpenAI would be an organization where people who see themselves as “working on AI alignment” work, and you can look at the recent discussion about whether that work reduces or increases actual risk, which is an open debate.
In a world where human alignment doesn’t work well enough to prevent dangerous gain-of-function experiments from happening, thinking about AI alignment instead of the problem of human alignment, where it’s easier to get feedback, might be the wrong strategic focus.
Did Vassar argue that existing EA organizations weren’t doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?
(a) EA orgs aren’t doing what they say they’re doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it’s hard to get organizations to do what they say they do
(b) Utilitarianism isn’t a form of ethics, it’s still necessary to have principles, as in deontology or two-level consequentialism
(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn’t well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved
(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact
If you want, for example, the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences there that suggest that the process by which their reports are made has epistemic problems. If you want the details, talk to him.
The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes played themselves out.
Vassar’s actions themselves are about doing altruistic work more directly: looking for whoever is most powerless and needs help, and working to help them. In the COVID case he identified prisoners and then worked on making PPE available to them.
You might see his thesis as being that “effective” in EA is about adding a management layer for directing interventions, and that management layer has the problems that the Immoral Mazes sequence describes. According to Vassar, someone who wants to be altruistic shouldn’t delegate their judgments of what’s effective and thus warrants support to other people.
I have a large number of negative Leverage experiences between 2015-2017 that I never wrote up due to various complicated adversarial dynamics surrounding Leverage and CEA (as well as various NDAs and legal threats, made by both Leverage and CEA, not leveled at me, but leveled at enough people around me that I thought I might cause someone serious legal trouble if I repeat a thing I heard somewhere in a more public setting)
I’m getting a bit pedantic, but I wouldn’t gloss this as “CEA used legal threats to cover up Leverage related information”. Partly because the original bit is vague, but also because “cover up” implies that the goal is to hide information.
For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.
In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common-knowledge post, you find people reporting that they were misled by CEA because the announcement didn’t mention that the Pareto Fellowship was largely run by Leverage.
On its mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved in the Pareto Fellowship and instead says: “The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers.”
That does look to me like hiding information about the cooperation between Leverage and CEA.
I do think that publicly presuming that people who hide information have something to hide is useful. If there’s nothing to hide, I’d love to know what happened back then, or who thinks what happened should stay hidden. At the minimum, I do think that CEA withholding the information that the people who went to their programs spent their time in what now appears to be a cult is something that CEA should be open about on its mistakes page.
Yep, I think CEA has in the past straightforwardly misrepresented (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage’s history with Effective Altruism. I think this was bad, and continues to be bad.
My initial thought on reading this was ‘this seems obviously bad’, and I assumed this was done to shield CEA from reputational risk.
Thinking about it more, I could imagine an epistemic state I’d be much more sympathetic to: ‘We suspect Leverage is a dangerous cult, but we don’t have enough shareable evidence to make that case convincingly to others, or we aren’t sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don’t feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can’t say anything we expect others to find convincing. So we’ll have to just steer clear of the topic for now.’
Still seems better to just not address the subject if you don’t want to give a fully accurate account of it. You don’t have to give talks on the history of EA!
I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something that I would put closer to something more like “Leverage maybe is bad, or maybe isn’t, but in any case it looks bad, and I don’t think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth”.
“Leverage maybe is bad, or maybe isn’t, but in any case it looks bad, and I don’t think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth”
That has the corollary: “We don’t expect EAs to care enough about the truth/being transparent that this is a huge reputational risk for us.”
It does look weird to me that CEA doesn’t include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:
Hi CEA,
On https://www.centreforeffectivealtruism.org/our-mistakes I see “The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers. We realized during and after the program that senior management did not provide enough oversight of the program. For example, reports by some applicants indicate that the interview process was unprofessional and made them deeply uncomfortable.”
Is there a reason that the mistakes page does not mention the involvement of Leverage in the Pareto Fellowship? [1]
Yep, I think the situation is closer to what Jeff describes here, though I honestly don’t actually know, since people tend to get cagey when the topic comes up.
I talked with Geoff, and according to him there’s no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.
Huh, that’s surprising, if by that he means “no contracts between anyone currently at Leverage and anyone at CEA”. I currently still think it’s the case, though I also don’t see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees?
What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don’t think anything happened that releases ex-CEA people from those NDAs.
The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it had an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn’t unilaterally lift the settlement contract.
Public pressure on CEA seems to be necessary to get the information out in the open.
Talking with Vassar feels very intellectually alive; maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn’t get much enjoyment out of insight porn either, so that emotional impact isn’t there.
There’s probably also an element that plenty of people who can normally follow an intellectual conversation can’t keep up in a conversation with Vassar, and afterwards are left with a bunch of different ideas that lack order in their minds. I imagine that sometimes there’s an idea overload that prevents people from critically thinking through some of the ideas.
If you have a person who hasn’t gone to college, they are used to encountering people who make intellectual arguments that go over their head, and they have a way to deal with that.
From meeting Vassar, I don’t feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff).
This seems mostly right; they’re more likely to think “I don’t understand a lot of these ideas, I’ll have to think about this for a while” or “I don’t understand a lot of these ideas, he must be pretty smart and that’s kinda cool” than to feel invalidated by this and try to submit to him in lieu of understanding.
The people I know who weren’t brought up to go to college have more experience navigating concrete threats and dangers, which can’t be avoided through conformity, since the system isn’t set up to take care of people like them. They have to know what’s going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.
In general this means that they’re much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.
This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.
This is interesting to me because I was brought up to go to college, but I didn’t take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god.
It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.
I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don’t think you’re being fair.
“jailbreak” yourself from it (I’m using a term I found on Ziz’s discussion of her conversations with Vassar; I don’t know if Vassar uses it himself)
I’m confident this is only a Ziz-ism: I don’t recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.
again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon
I’m having trouble figuring out how to respond to this hostile framing. I mean, it’s true that I’ve talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and “the community” have failed to live up to their stated purposes. Separately, it’s also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)
But, well … if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up, were now failing to achieve their purposes, wouldn’t you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends lives’ better, wouldn’t you recommend them?
Michael is a charismatic guy who has strong views and argues forcefully for them. That’s not the same thing as having mysterious mind powers to “make people paranoid” or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I’m sure he’d be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.
borderline psychosis, which the Vassarites mostly interpreted as success (“these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird”)
I can’t speak for Michael or his friends, and I don’t want to derail the thread by going into the details of my own situation. (That’s a future community-drama post, for when I finally get over enough of my internalized silencing-barriers to finish writing it.) But speaking only for myself, I think there’s a nearby idea that actually makes sense: if a particular social scene is sufficiently crazy (e.g., it’s a cult), having a mental breakdown is an understandable reaction. It’s not that mental breakdowns are in any way good—in a saner world, that wouldn’t happen. But if you were so unfortunate to be in a situation where the only psychologically realistic outcomes were either to fall into conformity with the other cult-members, or have a stress-and-sleep-deprivation-induced psychotic episode as you undergo a “deep emotional break with the wisdom of [your] pack”, the mental breakdown might actually be less bad in the long run, even if it’s locally extremely bad.
My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.
I recommend hearing out the pitch and thinking it through for yourself. (But, yes, without drugs; I think drugs are very risky and strongly disagree with Michael on this point.)
ZD said Vassar broke them out of a mental hospital. I didn’t ask them how.
(Incidentally, this was misreporting on my part, due to me being crazy at the time and attributing abilities to Michael that he did not, in fact, have. Michael did visit me in the psych ward, which was incredibly helpful—it seems likely that I would have been much worse off if he hadn’t come—but I was discharged normally; he didn’t bust me out.)
I don’t want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn’t harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I’m suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were feeling slightly better, and obviously they just responded with their “it’s correct to be freaking out about learning your entire society is corrupt and gaslighting” shtick.
I’m having trouble figuring out how to respond to this hostile framing. I mean, it’s true that I’ve talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and “the community” have failed to live up to their stated purposes. Separately, it’s also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)
[...]
Michael is a charismatic guy who has strong views and argues forcefully for them. That’s not the same thing as having mysterious mind powers to “make people paranoid” or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I’m sure he’d be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.
I more or less Outside View agree with you on this, which is why I don’t go around making call-out threads or demanding people ban Michael from the community or anything like that. (I’m only talking about it now because I feel like it’s fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically.) “This guy makes people psychotic by talking to them” is a silly accusation to go around making, and I hate that I have to do it!
But also, I do kind of notice the skulls and they are really consistent, and I would feel bad if my desire not to say this ridiculous thing resulted in more people getting hurt.
I think the minimum viable narrative here is, as you say, something like “Michael is very good at spotting people right on the verge of psychosis, and then he suggests they take drugs.” Maybe a slightly more complicated narrative involves bringing them to a state of total epistemic doubt where they can’t trust any institutions or any of the people they formerly thought were their friends, although now this is getting back into the “he’s just having normal truth-seeking conversation” objection. He also seems really good at pushing trans people’s buttons in terms of their underlying anxiety around gender dysphoria (see the Ziz post), so maybe that contributes somehow. I don’t know how it happens, I’m sufficiently embarrassed to be upset about something which looks like “having a nice interesting conversation” from the outside, and I don’t want to violate liberal norms that you’re allowed to have conversations—but I think those norms also make it okay to point out the very high rate at which those conversations end in mental breakdowns.
Maybe one analogy would be people with serial emotionally abusive relationships—should we be okay with people dating Brent? Like yes, he had a string of horrible relationships that left the other person feeling violated and abused and self-hating and trapped. On the other hand, most of this, from the outside, looked like talking. He explained why it would be hurtful for the other person to leave the relationship or not do what he wanted, and he was convincing and forceful enough about it that it worked (I understand he also sometimes used violence, but I think the narrative still makes sense without it). Even so, the community tried to make sure people knew that if they started a relationship with him they would get hurt, and eventually got really insistent about that. I do feel like this was a sort of boundary crossing of important liberal norms, but I think you’ve got to at least leave that possibility open for when things get really weird.
Before I actually make my point I want to wax poetic about reading SlateStarCodex.
In some post whose name I can’t remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, “Huh, I need to only be convinced by true things.”
This is extremely relatable to my lived experience. I am a stereotypical “high-functioning autist.” I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.
To the degree that “rationality styles” are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.
Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.
Thing 1:
Imagine two world models:
1. Some people want to act as perfect nth-order cooperating utilitarians, but can’t because of human limitations. They are extremely scrupulous, so they feel anguish and collapse emotionally. To prevent this, they rationalize and confabulate explanations for why their behavior actually is perfect. Then a moderately schizotypal man arrives and says: “Stop rationalizing.” Then the humans revert to the all-consuming anguish.
2. A collection of imperfect human moral actors who believe in utilitarianism act in an imperfect utilitarian way. An extremely charismatic man arrives who uses their scrupulosity to convince them they are not behaving morally, and then leverages their ensuing anguish to hijack their agency.
Which of these world models is correct? Both, obviously, because we’re all smart people here and understand the Machiavellian Intelligence Hypothesis.
Thing 2:
Imagine a being called Omegarapist. It has important ideas about decision theory and organizations. However, it has an uncontrollable urge to rape people. It is not a superintelligence; it is merely an extremely charismatic human. (This is a refutation of the Brent Dill analogy. I do not know much about Brent Dill.)
You are a humble student of Decision Theory. What is the best way to deal with Omegarapist?
1. Ignore him. This is good for AI-box reasons, but bad because you don’t learn anything new about decision theory. Also, humans with strange mindstates are more likely to provide new insights, conditioned on them having insights to give (this condition excludes extreme psychosis).
2. Let Omegarapist out. This is a terrible strategy. He rapes everybody, AND his desire to rape people causes him to warp his explanations of decision theory.
Therefore we should use Strategy 1, right? No. This is motivated stopping. Here are some other strategies.
1a. Precommit to only talk with him if he castrates himself first.
1b. Precommit to call in the Scorched-Earth Dollar Auction Squad (California law enforcement) if he has sex with anybody involved in this precommitment then let him talk with anybody he wants.
I made those in 1 minute of actually trying.
Returning to the object level, let us consider Michael Vassar.
Strategy 1 corresponds to exiling him. Strategy 2 corresponds to a complete reputational blank-slate and free participation. In three minutes of actually trying, here are some alternate strategies.
1a. Vassar can participate but will be shunned if he talks about “drama” in the rationality community or its social structure.
1b. Vassar can participate but is not allowed to talk with one person at once, having to always be in a group of 3.
2a. Vassar can participate but has to give a detailed citation, or an extremely prominent low-level epistemic status mark, to every claim he makes about neurology or psychiatry.
I am not suggesting any of these strategies, or even endorsing the idea that they are possible. I am asking: WHY THE FUCK IS EVERYONE MOTIVATED STOPPING ON NOT LISTENING TO WHATEVER HE SAYS!!!
I am a contractualist and a classical liberal. However, I recognize the empirical fact that there are large cohorts of people who relate to language exclusively for the purpose of predation and resource expropriation. What is a virtuous man to do?
The answer relies on the nature of language. Fundamentally, the idea of a free marketplace of ideas doesn’t rely on language or its use; it relies on the asymmetry of a weapon. The asymmetry of a weapon is a mathematical fact about information processing. It exists in the territory. If you see an information source that is dangerous, build a better weapon.
You are using a powerful asymmetric weapon of Classical Liberalism called language. Vassar is the fucking Necronomicon. Instead of sealing it away, why don’t we make another weapon? This idea that some threats are temporarily too dangerous for our asymmetric weapons, and have to be fought with methods other than reciprocity, is the exact same epistemology-hole found in diversity-worship.
“Diversity of thought is good.”
“I have a diverse opinion on the merits of vaccination.”
“Diversity of thought is good, except on matters where diversity of thought leads to coercion or violence.”
“When does diversity of thought lead to coercion or violence?”
“When I, or the WHO, say so. Shut up, prole.”
This is actually quite a few skulls, but everything has quite a few skulls. People die very often.
Thing 3:
Now let me address a counterargument:
Argument 1: “Vassar’s belief system posits a near-infinitely powerful omnipresent adversary that is capable of ill-defined mind control. This is extremely conflict-theoretic, and predatory.”
Here’s the thing: rationality in general is similar. I will use that same anti-Vassar counterargument as a steelman for sneerclub.
Argument 2: “The beliefs of the rationality community posit complete distrust in nearly every source of information and global institution, giving them an excuse to act antisocially. It describes human behavior as almost entirely Machiavellian, allowing them to be conflict-theoretic, selfish, rationalizing, and utterly incapable of coordinating. They ‘logically deduce’ the relevant possibility of eternal suffering or happiness for the human species (FAI and s-risk), and use that to control people’s current behavior and coerce them into giving up their agency.”
There is a strategy that accepts both of these arguments. It is called epistemic learned helplessness. It is actually a very good strategy if you are a normie. Metis and the reactionary concept of “traditional living/wisdom” are related principles. I have met people with 100 IQ who I would consider highly wise, due to skill at this strategy (and not accidentally being born religious, which is its main weak point.)
There is a strategy that rejects both of these arguments. It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype. Very few people follow this strategy so it is hard to give examples, but I will leave a quote from an old Scott Aaronson paper that I find very inspiring. “In pondering these riddles, I don’t have any sort of special intuition, for which the actual arguments and counterarguments that I can articulate serve as window-dressing. The arguments exhaust my intuition.”
THERE IS NO EFFECTIVE LONG-TERM STRATEGY THAT REJECTS THE SECOND ARGUMENT BUT ACCEPTS THE FIRST! THIS IS WHERE ALL THE FUCKING SKULLS ARE! Why? Because it requires a complex notion of what arguments to accept, and the more complex the notion, the easier it will be to rationalize around, apply inconsistently, or Goodhart. See “A formalist manifesto” by Moldbug for another description of this. (This reminds me of how UDT/FDT/TDT agents behave better than causal agents at everything, but get counterfactually mugged, which seems absurd to us. If you try to come up with some notion of “legitimate information” or “self-locating information” to prevent an agent from getting mugged, it will similarly lose functionality in the non-anthropic cases. [See the Sleeping Beauty problem for a better explanation.])
The only real social epistemologies are of the form:
“Free speech, but (explicitly defined but also common-sensibly just definition of ideas that lead to violence).”
Mine in particular is: “Free speech, but no (intentionally and directly inciting panic or violence using falsehoods).”
To put it a certain way, once you get on the Taking Ideas Seriously train, you cannot get off.
Thing 4:
Back when SSC existed, I got bored one day and looked at the blogroll. I discovered Hivewired. It was bad. Through Hivewired I discovered Ziz. I discovered the blackmail allegations while sick with a fever and withdrawing off an antidepressant. I had a mental breakdown, feeling utterly betrayed by the rationality community despite never even having talked to anyone in it. Then I rationalized it away. To be fair, this was reasonable considering the state in which I processed the information. However, the thought processes used to dismiss the worry were absolutely rationalizations. I can tell because I can smell them.
Fast forward a year. I am at a yeshiva to discover whether I want to be religious. I become an anti-theist and start reading rationality stuff again. I check out Ziz’s blog out of perverse curiosity. I go over the allegations again. I find a link to a more cogent, falsifiable, and specific case. I freak the fuck out. Then I get to work figuring out which parts are actually true.
MIRI paid out to blackmail. There’s an unironic Catholic working at CFAR and everyone thinks this is normal. He doesn’t actually believe in god, but he believes in belief, which is maybe worse. CFAR is a collection of rationality workshops, not a coordinated attempt to raise the sanity waterline (Anna told me this in a private communication, and this is public information as far as I know), but has not changed its marketing to match. Rationalists are incapable of coordinating, which is basically their entire job. All of these problems were foreseen by the Sequences, but no one has read the Sequences because most rationalists are an army of sci-fi midwits who read HPMOR then changed the beliefs they were wearing. (Example: Andrew Rowe. I’m sorry but it’s true, anyways please write Arcane Ascension book 4.)
I make contact with the actual rationality community for the first time. I trawl through blogs, screeds, and incoherent symbolist rants about morality written as a book review of The Northern Caves. Someone becomes convinced that I am an internet gangstalker who created an elaborate false identity of an 18-year-old gap year kid to make contact with them. Eventually I contact Benjamin Hoffman, who leads me to Vassar, who leads me to the Vassarites.
He points out to me a bunch of things that were very demoralizing, and absolutely true. Most people act evil out of habituation and deviancy training, including my loved ones. Global totalitarianism is a relevant s-risk as societies become more and more hysterical due to a loss of existing wisdom traditions, and too low of a sanity waterline to replace them with actual thinking. (Mass surveillance also helps.)
I work on a project with him trying to create a micro-state within the government of New York City. During and after this project I am increasingly irritable and misanthropic. The project does not work. I effortlessly update on this, distance myself from him, then process the feeling of betrayal by the rationality community and inability to achieve immortality and a utopian society for a few months. I stop being a Vassarite. I continue to read blogs to stay updated on thinking well, and eventually I unlearn the new associated pain. I talk with the Vassarites as friends and associates now, but not as a club member.
What does this story imply? Michael Vassar induced mental damage in me, partly through the way he speaks and acts. However, as a primary effect of this, he taught me true things. With basic rationality skills, I avoided contracting the Vassar, then healed the damage to my social life and behavior caused by this whole shitstorm (most of said damage was caused by non-Vassar factors).
Now I am significantly happier, more agentic, and more rational.
Thing 5:
When I said what I did in Thing 1, I meant it. Vassar gets rid of identity-related rationalizations. Vassar drives people crazy. Vassar is very good at getting other people to see the moon in finger pointing at the moon problems and moving people out of local optimums into better local optimums. This requires the work of going downwards in the fitness landscape first. Vassar’s ideas are important and many are correct. It just happens to be that he might drive you insane. The same could be said of rationality. Reality is unfair; good epistemics isn’t supposed to be easy. Have you seen mathematical logic? (It’s my favorite field).
An example of an important idea that may come from Vassar, but is likely much older:
Control over a social hierarchy goes to a single person; this is a pluralist preference aggregation system. In those, the best strategy is to vote only within the two blocs that “matter.” Similarly, if you need to join a war and know you will be killed if your side loses, you should join the winning side. Thus humans are attracted to powerful groups of humans. This is a (grossly oversimplified) evolutionary origin of one type of conformity effect.
Power is the ability to make other human beings do what you want. There are fundamentally two strategies to get it: help other people so that they want you to have power, or hurt other people to credibly signal that you already have power. (Note the correspondence of these two to dominance and prestige hierarchies). Credibly signaling that you have a lot of power is almost enough to get more power.
However, if the people harming themselves to signal your power admit publicly that they are harming themselves, they can coordinate with neutral parties to move the Schelling point and establish a new regime. Thus there are two obvious strategies for achieving ultimate power: help people get what they want (extremely difficult), or make people publicly harm themselves while shouting how great they feel (much easier). The famous bad equilibrium of 8 hours of shocking oneself per day is an obvious example.
Benjamin Ross Hoffman’s blog is very good, but awkwardly organized. He conveys explicit, literal models of these phenomena that are very useful and do not carry the risk of filling your head with whispers from the beyond. However, they have less impact because of it.
Thing 6:
I’m almost done with this mad effortpost. I want to note one more thing. Mistake theory works better than conflict theory. THIS IS NOT NORMATIVE.
Facts about the map-territory distinction and facts about the behaviors of mapmaking algorithms are facts about the territory. We can imagine a very strange world where conflict theory is a more effective way to think. One of the key assumptions of conflict theorists is that complexity or attempts to undermine moral certainty are usually mind control. Another key assumption is that there are entrenched power groups, or individual malign agents, who will use these things to hack you.
These conditions are neither necessary nor sufficient for conflict theory to be better than mistake theory. I have an ancient and powerful technique called “actually listening to arguments.” When I’m debating with someone whom I know to argue in bad faith, I decrypt everything they say into logical arguments. Then I use those logical arguments to modify my world model. One might say adversaries can use biased selection and rationalization to make you less logical despite this strategy. I say, on an incurable hardware and wetware level, you are already doing this. (For example, any Bayesian agent of finite storage space is subject to the Halo Effect, as you described in a post once.) Having someone do it in a different direction can helpfully knock you out of your models and back into reality, even if their models are bad. This is why it is still worth decrypting the actual information content of people you suspect to be in bad faith.
Uh, thanks for reading, I hope this was coherent, have a nice day.
One note though: I think this post (along with most of the comments) isn’t treating Vassar as a fully real person with real choices. It (also) treats him like some kind of ‘force in the world’ or ‘immovable object’. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I’m glad you yourself were able to “With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life.”
But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are.
I think it’s pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that’s in his capacity, which I think is a lot.
“Vassar’s ideas are important and many are correct. It just happens to be that he might drive you insane.”
I might think this was a worthwhile tradeoff if I actually believed the ‘maybe insane’ part was unavoidable, and I do not believe it is. I know that with more mental training, people can absorb more difficult truths without risk of damage. Maybe Vassar doesn’t want to offer this mental training himself; that isn’t much of an excuse, in my book, to target people who are ‘close to the edge’ (where ‘edge’ might be near a better local optimum) but who lack solid social support, rationality skills, mental training, or spiritual groundedness and then push them.
His service is well-intentioned, but he’s not doing it wisely and compassionately, as far as I can tell.
I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.
In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…
I think you can either have a discussion that focuses on an individual and if you do it makes sense to model them with agency or you can have more general threat models.
If you however mix the two you are likely to get confused in both directions. You will project ideas from your threat model into the person and you will take random aspects of the individual into your threat model that aren’t typical for the threat.
I am not sure how much ‘not destabilize people’ is an option that is available to Vassar.
My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.
Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of “you are expected to behave better for status reasons look at my smug language”-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, “Vassar, just only say things that you think will have a positive effect on the person.” 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.
In the pathological case of Vassar, I think the naive strategy of “just say the thing you think is true” is still correct.
Mental training absolutely helps. I would say that, considering that the people who talk with Vassar are literally from a movement called rationality, it is a normatively reasonable move to expect them to be mentally resilient. Factually, this is not the case. The “maybe insane” part is definitely not unavoidable, but right now I think the problem is with the people talking to Vassar, and not he himself.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.
My suggestion for Vassar is not to ‘try not to destabilize people’ exactly.
It’s to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he’s interacted with about what it’s like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you’re speaking to as though they’re a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking “at” rather than talking “to” or “with”. The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things.
I expect this process could take a long time / run into issues along the way, and so I don’t think it should be rushed. Not expecting a quick change. But claiming there’s no available option seems wildly wrong to me. People aren’t fixed points and generally shouldn’t be treated as such.
This is actually very fair. I think he does kind of insert information into people.
I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher’s information.
I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.
It’s to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he’s interacted with about what it’s like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you’re speaking to as though they’re a full, real human—not a pair of ears to be talked into or a mind to insert things into).
I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think that he’s pretty dissociated from his body, which closes a normal channel of perceiving impacts on the person he’s speaking with. This thing looks to me like some bodily process generating stress / pain and being a cause for dissociation. It might need a body worker to fix whatever goes on there to create the conditions for perceiving the other person better.
Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, “Vassar, just only say things that you think will have a positive effect on the person.” 1. He already does that. 2. That is advocating that Vassar manipulate people.
You are making a false dichotomy here. You are assuming that everything that has a negative effect on a person is manipulation.
As Vassar himself sees the situation, people believe a lot of lies for reasons of fitting in socially in society. From that perspective, getting people to stop believing those lies will make it harder for them to fit socially into society.
If you got a Nazi guard at Auschwitz into a state where the moral issue of their job can’t be dissociated anymore, that’s very predictably going to have a negative effect on that prison guard.
Vassar’s position would be that it would be immoral to avoid talking about the truth about the nature of the guard’s job out of a motivation to make life easier for the guard.
I think this line of discussion would be well served by marking a natural boundary in the cluster “crazy.” Instead of saying “Vassar can drive people crazy” I’d rather taboo “crazy” and say:
Many people are using their verbal idea-tracking ability to implement a coalitional strategy instead of efficiently compressing external reality. Some such people will experience their strategy as invalidated by conversations with Vassar, since he’ll point out ways their stories don’t add up. A common response to invalidation is to submit to the invalidator by adopting the invalidator’s story. Since Vassar’s words aren’t selected to be a valid coalitional strategy instruction set, attempting to submit to him will often result in attempting obviously maladaptive coalitional strategies.
People using their verbal idea-tracking ability to implement a coalitional strategy cannot give informed consent to conversations with Vassar, because in a deep sense they cannot be informed of things through verbal descriptions, and the risk is one that cannot be described without the recursive capacity of descriptive language.
Personally I care much more, maybe lexically more, about the upside of minds learning about their situation, than the downside of mimics going into maladaptive death spirals, though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much like it’s desirable to avoid torturing animals, and it would be desirable for city lights not to interfere with sea turtles’ reproductive cycle by resembling the moon too much.
EDIT: Ben is correct to say we should taboo “crazy.”
This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought. (entirely wrong)
I also don’t think people interpret Vassar’s words as a strategy and implement incoherence. Personally, I interpreted Vassar’s words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don’t know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed)
The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.
“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.
And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.
If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.
Personally, I interpreted Vassar’s words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away.
What specific claims turned out to be false? What counterevidence did you encounter?
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)
This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person’s language. Ideas and institutions have the agency; they wear people like skin.
Specific claim: this is how to take over New York.
Didn’t work.
I think this needs to be broken up into 2 claims:
1. If we execute strategy X, we’ll take over New York.
2. We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.
2 has been falsified decisively. The plan to recruit candidates via appealing to people’s explicit incentives failed, there wasn’t a good alternative, and as a result there wasn’t a chance to test other parts of the plan (1).
That’s important info and worth learning from in a principled way. Definitely I won’t try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or investing in people who seem like they’re already doing this, as long as I don’t have to count on other unknown people acting similarly in the future.
But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back and waiting for the plan to fail, and then saying, “see? novel multi-step plans don’t work!” extremely annoying. I’ve been on both sides of that kind of transaction, but if we want anything to work out well we have to distinguish cases of “we / someone else decided not to try” as a different kind of failure from “we tried and it didn’t work out.”
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
This seems to be conflating the question of “is it possible to construct a difficult problem?” with the question of “what’s the rate-limiting problem?”. If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I’d very much like to hear the details. If I’m persuaded I’ll be interested in figuring out how to help.
So far this seems like evidence to the contrary, though, as it doesn’t look like you thought you could get help making things better for many people by explaining the opportunity.
To the extent I’m worried about Vassar’s character, I am equally worried about the people around him. It’s the people around him who should also take responsibility for his well-being and his moral behavior. That’s what friends are for. I’m not putting this all on him. To be clear.
I think it’s a fine way to think about mathematical logic, but if you try to think this way about reality, you’ll end up with views that make internal sense and are self-reinforcing but don’t follow the grain of facts at all. When you hear such views from someone else, it’s a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.
The people who actually know their stuff usually come off very different. Their statements are carefully delineated: “this thing about power was true in 10th century Byzantium, but not clear how much of it applies today”.
Also, just to comment on this:
It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.
I think it’s somewhat changeable. Even for people like us, there are ways to make our processing more “fuzzy”. Deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; and interpersonally, the world is full of small spontaneous exchanges happening on the “warm fuzzy” level, it’s not nearly so cold a place as it seems, and plugging into that market is so worth it.
I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)
Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See “Safety in numbers” by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)
I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils.
I sometimes round things, it is not inherently bad.
Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things then dimming being bad in most contexts follows as a natural consequence.
On the second paragraph:
This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without the legibility of explicit elucidation, false ideas so gained are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians.
Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is “this is true everywhere and false nowhere.” See “The Proper Use of Humility,” and for an example of how delineations often should be large, “Universal Fire.”
On the first paragraph:
Reality is hostile through neutrality. Any optimizing agent naturally optimizes against most other optimization targets when resources are finite. Lifeforms are (badly) optimized for inclusive genetic fitness. Thermodynamics looks like the sort of Universal Law that an evil god would construct. According to a quick Google search approximately 3,700 people die in car accidents per day and people think this is completely normal.
Many things are actually effective. For example, most places in the United States have drinkable-ish running water. This is objectively impressive. A model must not be entirely made out of “the world is evil,” otherwise it runs against facts. But the natural mental motion you make, as a default, should be: “How is this system produced by an aggressively neutral, entirely mechanistic reality?”
See the entire Sequence on evolution, as well as Beyond the Reach of God.
I mostly see where you’re coming from, but I think the reasonable answer to “point 1 or 2 is a false dichotomy” is this classic, uh, tumblr quote (from memory):
“People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail.”
This goes especially if the thing that comes after “just” is “just precommit.”
My expectation about interaction with Vassar is that the people who espouse 1 or 2 expect that those interacting with him are incapable of precommitting to the required strength. I don’t know if they’re correct, but I’d expect them to be, because I think people are just really bad at precommitting in general. If precommitting were easy, I think we’d all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.
This is a very good criticism! I think you are right about people not being able to “just.”
My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think it is, based on “vibe” and on the arguments that people are making, such as “argument from cult.”
I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called “rationalists.” This comes off as sarcastic but I mean it completely literally.
Precommitting isn’t easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as “five minutes of actually trying” and alkjash’s “Hammertime.” Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social learning.) Many problems are solved immediately and almost effortlessly by just giving the reins to the small part.
Relatedly, to address one of your examples, I expect at least one of the following things to be true about any given competent rationalist.
They have a physiological problem.
They don’t believe becoming fit to be worth their time, and have a good reason to go against the naive first-order model of “exercise increases energy and happiness set point.”
They are fit.
Hypocritically, I fail all three of these criteria. I take full blame for this failure and plan on ameliorating it. (You don’t have to take Heroic Responsibility for the world, but you have to take it for yourself.)
A trope-y way of thinking about it is: “We’re supposed to be the good guys!” Good guys don’t have to be heroes, but they have to be at least somewhat competent, and they have to, as a strong default, treat potential enemies like their equals.
It’s not just Vassar. It’s how the whole community has excused and rationalized away abuse. I think the correct answer to the omega rapist problem isn’t to ignore him but to destroy his agency entirely. He’s still going to alter his decision theory towards rape even if castrated.
However, I gave you a double-upvote because you did nothing normatively wrong. The fact that you are being mass-downvoted just because you linked to that article and because you seem to be associated with Ziz (because of the gibberish name and specific conception of decision theory) is extremely disturbing.
Can we have LessWrong not be Reddit? Let’s not be Reddit. Too late, we’re already Reddit. Fuck.
You are right that, unless people can honor precommitments perfectly and castration is irreversible even with transhuman technology, Omegarapist will still alter his decision theory. Despite this, there are probably better solutions than killing or disabling him. I say this not out of moral ickiness, but out of practicality.
-
Imagine both you and Omegarapist are actual superintelligences. Then you can just do a utility-function merge to avoid the inefficiency of conflict, and move on with your day.
Humans have a similar form of this. Humans, even when sufficiently distinct in moral or factual position as to want to kill each other, often don’t. This is partly because of an implicit assumption that their side, the correct side, will win in the end, and that this is less true if they break the symmetry and use weapons. Scott uses the example of a pro-life and pro-choice person having dinner together, and calls it “divine intervention.”
There is an equivalent of this with Omegarapist. Make some sort of pact and honor it: he won’t rape people, but you won’t report his previous rapes to the Scorched Earth Dollar Auction squad. Work together on decision theory until the project is complete. Then agree either to utility-merge with him in the consequent utility function, or just shoot him. I call this “swordfighting at the edge of a cliff while shouting about our ideologies.” I would be willing to work with Moldbug on Strong AI, but if we had to input the utility function, the person who would win would be determined by a cinematic swordfight. In a similar case with my friend Sudo Nim, we could just merge utilities.
If you use the “shoot him” strategy, Omegarapist is still dead. You just got useful work out of him first. If he rapes people, just call in the Dollar Auction squad. The problem here isn’t cooperating with Omegarapist, it’s thinking to oneself “he’s too useful to actually follow precommitments about punishing” if he defects against you. This is fucking dumb. There’s a great webnovel called Reverend Insanity which depicts what organizations look like when everyone uses pure CDT like this. It isn’t pretty, and it’s also a very accurate depiction of the real world landscape.
Oh come on. The post was downvoted because it was inflammatory and low quality. It made a sweeping assertion while providing no evidence except a link to an article that I have no reason to believe is worth reading. There is a mountain of evidence that being negative is not a sufficient cause for being downvoted on LW, e.g. the OP.
You absolutely have a reason to believe the article is worth reading.
If you live coordinated with an institution, spending 5 minutes of actually trying (every few months) to see if that institution is corrupt is a worthy use of time.
I don’t think I live coordinated with CFAR or MIRI, but it is true that, if they are corrupt, this is something I would like to know.
However, that’s not sufficient reason to think the article is worth reading. There are many articles making claims that, if true, I would very much like to know (e.g. someone arguing that the Christian Hell exists).
I think the policy I follow (although I hadn’t made it explicit until now) is to ignore claims like this by default but listen up as soon as I have some reason to believe that the source is credible.
Which incidentally was the case for the OP. I have spent a lot more than 5 minutes reading it & replies, and I have, in fact, updated my view of CFAR and MIRI. It wasn’t a massive update in the end, but it also wasn’t negligible. I also haven’t downvoted the OP, and I believe I also haven’t downvoted any comments from jessicata. I’ve upvoted some.
Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.
So, this seems deliberate.
Because high-psychoticism people are the ones who are most likely to understand what he has to say.
This isn’t nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn’t like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky’s writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners): why, they’re preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!
I mean, technically, yes. But in Yudkowsky and friends’ worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they’re going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?
There’s a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don’t have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.
If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you’d object to that targeting strategy even though they’d be able to make an argument structurally the same as your comment.
Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it’s even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.
In general this seems really expected and unobjectionable? “If I’m trying to convince people of X, I’m going to find people who already believe a lot of the pre-requisites for understanding X and who might already assign X a non-negligible prior”. This is how pretty much all systems of ideas spread, I have trouble thinking of a counterexample.
I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?
If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn’t care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.
The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from “psychotic,” and imagine there is a spectrum from autistic to psychotic. On this spectrum the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren’t already primed to have in mind, while the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or the other could be very helpful in different contexts.
See also: indexicality.
On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than “autism,” on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).
I wouldn’t find it objectionable. I’m not really sure what morally relevant distinction is being pointed at here, apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.
Well, I don’t think it’s obviously objectionable, and I’d have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like “we’d all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we’re talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren’t generally either truth-tracking or good for them” seems plausible to me. But I think it’s not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.
I don’t have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as “susceptibility to invalid methods of persuasion”, which seems notably higher in the case of people with high “apocalypticism” than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high “psychoticism”.)
That might be relevant in some cases but seems unobjectionable both in the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck, it’s by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger’s-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (also both frequent around here).
I question Vassar’s wisdom, if what you say is indeed true about his motives.
I question whether he’s got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he’s appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn’t know how to integrate.
I question how much work he’s done on his own shadow and whether it’s not inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics or if he has ‘shadow stuff’ that he’s not seeing.
I don’t think this needs to be hashed out in public, but I hope people are working closer to him on these things who have the wisdom and integrity to do the right thing.
But, well … if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up, were now failing to achieve their purposes, wouldn’t you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends lives’ better, wouldn’t you recommend them?
When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I’d love it if anyone who’s nearer could confirm/deny the rumor and fill in missing pieces.
As I mentioned elsewhere, I was heavily involved in that incident for a couple months after it happened and I looked for causes that could help with the defense. AFAICT No drugs were taken in the days leading up to the mental health episode or arrest (or people who took drugs with him lied about it).
As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don’t think anyone is to blame for his having had a mental break in the first place.
I’ve now gotten some better-sourced information from a friend who’s actually in good contact with Eric. I’m now also quite certain that there were no drugs involved, and that it isn’t a case of any one person being mainly responsible for it happening, but of multiple people making bad decisions. I’m currently hoping that Eric will tell his side himself, so that there’s less indirection about the information sourcing, so I’m not saying more about the details at this point in time.
Edit: The following account is a component of a broader and more complex narrative. While it played a significant role, it must be noted that there were numerous additional challenges concurrently affecting my life. Absent these complicating factors, the issues delineated in this post alone may not have precipitated such severe consequences. Additionally, I have made minor revisions to the third-to-last bullet point for clarity.
It is pertinent to provide some context to parts of my story that are relevant to the ongoing discussions.
My psychotic episode was triggered by a confluence of factors, including acute physical and mental stress, as well as exposure to a range of potent memes. I have composed a detailed document on this subject, which I have shared privately with select individuals. I am willing to share this document with others who were directly involved or have a legitimate interest. However, a comprehensive discussion of these details is beyond the ambit of this post, which primarily focuses on the aspects related to my experiences at Vassar.
During my psychotic break, I believed that someone associated with Vassar had administered LSD to me. Although I no longer hold this belief, I cannot entirely dismiss it. Nonetheless, given my deteriorated physical and mental health at the time, the vividness of my experiences could be attributed to a placebo effect or the onset of psychosis.
My delusions prominently featured Vassar. At the time of my arrest, I had a notebook with multiple entries stating “Vassar is God” and “Vassar is the Devil.” This fixation partly stemmed from a conversation with Vassar, where he suggested that my “pattern must be erased from the world” in response to my defense of EA. However, it was primarily fueled by the indirect influence of someone from his group with whom I had more substantial contact.
This individual was deeply involved in a psychological engagement with me in the months leading to my psychotic episode. In my weakened state, I was encouraged to develop and interact with a mental model of her. She once described our interaction as “roleplaying an unfriendly AI,” which I perceived as markedly hostile. Despite the negative turn, I continued the engagement, hoping to influence her positively.
After joining Vassar’s group, I urged her to critically assess his intense psychological methods. She relayed a conversation with Vassar about “fixing” another individual, Anna (Salamon), to “see material reality” and “purge her green.” This exchange profoundly disturbed me, leading to a series of delusions and ultimately exacerbating my psychological instability, culminating in a psychotic state. This descent into madness continued for approximately 36 hours, ending with an attempted suicide and an assault on a mental health worker.
Additionally, it is worth mentioning that I visited Leverage on the same day. Despite exhibiting clear signs of delusion, I was advised to exercise caution with psychological endeavors. Ideally, further intervention, such as suggesting professional help or returning me to my friends, might have been beneficial. I was later informed that I was advised to return home, though my recollection of this is unclear due to my mental state at the time.
In the hotel that night, my mental state deteriorated significantly after I performed a mental action which I interpreted as granting my mental model of Vassar substantial influence over my thoughts, in an attempt to regain stability.
While there are many more intricate details to this story, I believe the above summary encapsulates the most critical elements relevant to our discussion.
I do not attribute direct blame to Vassar, as it is unlikely he either intended or could have reasonably anticipated these specific outcomes. However, his approach, characterized by high-impact psychological interventions, can inadvertently affect the mental health of those around him. I hope that he has recognized this potential for harm and exercises greater caution in the future.
Thanks for sharing the details of your experience. Fyi I had a trip earlier in 2017 where I had the thought “Michael Vassar is God” and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.
If I’m trying to put my finger on a real effect here, it’s related to how Michael Vassar was one of the initial people who set up the social scene (e.g. running singularity summits and being executive director of SIAI), being on the more “social/business development/management” end relative to someone like Eliezer; so if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, like a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation (though to a much lesser degree of course).
As a related example, Von Neumann was involved in setting up post-WWII US Modernism, and is also attributed extreme mental powers by modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also has more memetic influence within that system, and could more effectively change its boundaries e.g. in creating new fields of study.
Fyi I had a trip earlier in 2017 where I had the thought “Michael Vassar is God” and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.
2017 would be the year Eric’s episode happened as well. Did this result in multiple conversations about “Michael Vassar is God” that Eric might then have picked up when he hung around the group?
I don’t know, some of the people were in common between these discussions so maybe, but my guess would be that it wasn’t causal, only correlational. Multiple people at the time were considering Michael Vassar to be especially insightful and worth learning from.
I haven’t used the word god myself nor have heard it used by other people to refer to someone who’s insightful and worth learning from. Traditionally, people learn from prophets and not from gods.
Can someone please clarify what is meant in this context by ‘Vassar’s group’, or the term ‘Vassarites’ used by others?
My intuition previously was that Michael Vassar had no formal ‘group’ or institution of any kind, and that it was just more like ‘a cluster of friends who hung out together a lot’, but this comment makes it seem like something more official.
While “Vassar’s group” is informal, it’s more than just a cluster of friends; it’s a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like “the AI safety community” or “wokeness” or “the startup scene” that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I’ve ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.
Median Group is the closest thing to a “Vassarite” institution, in that its listed members are 2⁄3 people who I’ve heard/read describing the strong influence Vassar has had on their thinking and 1⁄3 people I don’t know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn’t claim to speak for the whole scene or anything.
Michael and I are sometimes-housemates and I’ve never seen or heard of any formal “Vassarite” group or institution, though he’s an important connector in the local social graph, such that I met several good friends through him.
It sounds like you’re saying that based on extremely sparse data you made up a Michael Vassar in your head to drive you crazy. More generally, it seems like a bunch of people on this thread, most notably Scott Alexander, are attributing spooky magical powers to him. That is crazy cult behavior and I wish they would stop it.
ETA: In case it wasn’t clear, “that” = multiple people elsewhere in the comments attributing spooky mind control powers to Vassar. I was trying to summarize Eric’s account concisely, because insofar as it assigns agency at all I think it does a good job assigning it where it makes sense to, with the person making the decisions.
Reading through the comments here, I perceive a pattern of short-but-strongly-worded comments from you, many of which seem to me to contain highly inflammatory insinuations while giving little impression of any investment of interpretive labor. It’s not [entirely] clear to me what your goals are, but barring said goals being very strange and inexplicable indeed, it seems to me extremely unlikely that they are best fulfilled by the discourse style you have consistently been employing.
To be clear: I am annoyed by this. I perceive your comments as substantially lower-quality than the mean, and moreover I am annoyed that they seem to be receiving engagement far in excess of what I believe they deserve, resulting in a loss of attentional resources that could be used engaging more productively (either with other commenters, or with a hypothetical-version-of-you who does not do this). My comment here is written for the purpose of registering my impressions, and making it common-knowledge among those who share said impressions (who, for the record, I predict are not few) that said impressions are, in fact, shared.
(If I am mistaken in the above prediction, I am sure the voters will let me know in short order.)
I say all of the above while being reasonably confident that you do, in fact, have good intentions. However, good intentions do not ipso facto result in good comments, and to the extent that they have resulted in bad comments, I think one should point this fact out as bluntly as possible, which is why I worded the first two paragraphs of this comment the way I did. Nonetheless, I felt it important to clarify that I do not stand against [what I believe to be] your causes here, only the way you have been going about pursuing those causes.
(For the record: I am unaffiliated with MIRI, CFAR, Leverage, MAPLE, the “Vassarites”, or the broader rationalist community as it exists in physical space. As such, I have no direct stake in this conversation; but I very much do have an interest in making sure discussion around any topics this sensitive are carried out in a mature, nuanced way.)
If you want to clarify whether I mean to insinuate something in a particular comment, you could ask, like I asked Eliezer. I’m not going to make my comments longer without a specific idea of what’s unclear, that seems pointless.
It is accurate to state that I constructed a model of him based on limited information, which subsequently contributed to my dramatic psychological collapse. Nevertheless, the reason for developing this particular model can be attributed to his interactions with me and others. This was not due to any extraordinary or mystical abilities, but rather his profound commitment to challenging individuals’ perceptions of conventional reality and mastering the most effective methods to do so.
This approach is not inherently negative. However, it must be acknowledged that for certain individuals, such an intense disruption of their perceived reality can precipitate a descent into a detrimental psychological state.
Thanks for verifying. In hindsight my comment reads as though it was condemning you in a way I didn’t mean to; sorry about that.
The thing I meant to characterize as “crazy cult behavior” was people in the comments here attributing things like what you did in your mind to Michael Vassar’s spooky mind powers. You seem to be trying to be helpful and informative here. Sorry if my comment read like a personal attack.
This can be unpacked into an alternative to the charisma theory.
Many people are looking for a reference person to tell them what to do. (This is generally consistent with the Jaynesian family of hypotheses.) High-agency people are unusually easy to refer to, because they reveal the kind of information that allows others to locate them. There’s sufficient excess demand that even if someone doesn’t issue any actual orders, if they seem to have agency, people will generalize from sparse data to try to construct a version of that person that tells them what to do.
I have sent you the document in question. As the contents are somewhat personal, I would prefer that it not be disseminated publicly. However, I am amenable to it being shared with individuals who have a valid interest in gaining a deeper understanding of the matter.
I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly. (I don’t think short-term use of antipsychotics was bad, in my case)
It is in this context that I’m reading that someone talking about the possibility of mental subprocess implantation (“demons”) should be “treated as a psychological emergency”, when the Eric Bryulant case had already happened, and talking about the psychological processes was necessary for making sense of the situation. I feared involuntary institutionalization at the time, quite a lot, for reasons like this.
If someone expresses opinions like this, and I have reason to believe they would act on them, then I can’t believe myself to have freedom of speech. That might be better than them not sharing the opinions at all, but the social structural constraints this puts me under are obvious to anyone trying to see them.
Given what happened, I don’t think talking to a normal therapist would have been all that bad in 2017, in retrospect; it might have reduced the overall amount of psychiatric treatment needed during that year. I’m still really opposed to the coercive “you need professional help” framing in response to sharing weird thoughts that might be true, instead of actually considering them, like a Bayesian.
I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.
My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. EG this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerous psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient’s buy-in, ie if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case, if they said they didn’t want it we would explore why given the very high risk level, and if they still said they didn’t want it then I would follow their direction.
I didn’t get a chance to talk to you during your episode, so I don’t know exactly what was going on. I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, as more of a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom. I think in mild psychosis it’s possible to snap someone back to reality where they agree their weird thoughts aren’t true, but in severe psychosis it isn’t (I remember when I was a student I tried so hard to convince someone that they weren’t royalty, hours of passionate debate, and it just did nothing). I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway—you’re treating a symptom. Analogy to eg someone having chest pain from a heart attack, and you give them painkillers for the pain but don’t treat the heart attack.
(although there’s a separate point where it would be wrong and objectifying to falsely claim someone who’s just thinking differently is psychotic or pre-psychotic, given that you did end up psychotic it doesn’t sound like the people involved were making that mistake)
My impression is that some medium percent of psychotic episodes end in permanent reduced functioning, and some other medium percent end in suicide or jail or some other really negative consequence, and this is scary enough that treating it is always an emergency, and just treating the symptom but leaving the underlying condition is really risky.
I agree many psychiatrists are terrible and that wanting to avoid them is a really sympathetic desire, but when it’s something really serious like psychosis I think of this as like wanting to avoid surgeons (another medical profession with more than its share of jerks!) when you need an emergency surgery.
I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.
Ok, the opinions you’ve described here seem much more reasonable than what I remember, thanks for clarifying.
I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, since it’s a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom.
I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird thoughts that people would refuse to engage with, and showing some signs of anxiety/distress that were in some ways a reaction to my actual situation. By the time I was losing sleep etc, things were quite different at a physiological level and it made sense to treat the situation as a psychiatric emergency.
If you can show someone that they’re making errors that correspond to symptoms of mild psychosis, then telling them that and suggesting corresponding therapies to help with the underlying problem seems pretty reasonable.
Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.
I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway—you’re treating a symptom.
If psychosis is caused by an underlying physiological/biochemical process, wouldn’t that suggest that e.g. exposure to Leverage Research wouldn’t be a cause of it?
If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?
I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that’s true, I’d expect changing someone’s environment to be more helpful for the former sort of case.
[probably old-hat [ETA: or false], but I’m still curious what you think] My (background unexamined) model of psychosis → schizophrenia is that something, call it the “triggers”, sets a person on a trajectory of less coherence / grounding; if the trajectory isn’t corrected, they just go further and further. The “triggers” might be multifarious; there might be “organic” psychosis and “psychic” psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. If your brain can rearrange itself quickly enough to cope with the newly known reality, your trajectory points back to the ground. If it can’t, you might have a chain reaction where (1) horrible facts you were previously carefully ignoring, are revealed because you no longer have the superstructure that was ignore-coping with them; (2) your ungroundedness opens the way to unepistemic beliefs, some of which might be additionally horrifying if true; (3) you’re generally stressed out because things are going wronger and wronger, which reinforces everything.
If this is true, then your statement:
I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that’s kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway—you’re treating a symptom
is only true for some values of “guide them back to reality-based thoughts”. If you’re trying to help them go back to ignore-coping, you might partly succeed, but not in a stable way, because you only pushed the ball partway back up the hill, to mix metaphors—the ball is still on a slope and will roll back down when you stop pushing, the horrible fact is still revealed and will keep being horrifying. But there are other things you could do, like helping them find a non-ignore-cope for the fact; or showing them enough that they become convinced the belief isn’t true.
There is this basic idea (I think from an old blogpost that Eliezer wrote) that if someone says there are goblins in the closet, dismissing them outright is confusing rationality with trust in commonly held claims, whereas the truly rational thing is to just open the closet and look.
I think this is correct in principle but not applicable in many real-world cases. The real reason why even rational people routinely dismiss many weird explanations for things isn’t that they have sufficient evidence against them, it’s that the weird explanation is inconsistent with a large set of high confidence beliefs that they currently hold. If someone tells me that they can talk to their deceased parents, I’m probably not going to invest the time to test whether they can obtain novel information this way; I’m just going to assume they’re delusional because I’m confident spirits don’t exist.
That said, if that someone helped write the logical induction paper, I personally would probably hear them out regardless of how weird the thing sounds. Nonetheless, I think it remains true that dismissing beliefs without considering the evidence is often necessary in practice.
If someone tells me that they can talk to their deceased parents, I’m probably not going to invest the time to test whether they can obtain novel information this way; I’m just going to assume they’re delusional because I’m confident spirits don’t exist.
This is failing to track ambiguity in what’s being referred to. If there’s something confusing happening—something that seems important or interesting, but that you don’t yet have the words to articulate well—then you try to say what you can (e.g. by talking about “demons”). In your scenario, you don’t know exactly what you’re dismissing. You can confidently dismiss, in the absence of extraordinary evidence, that (1) their parents’s brains have been rotting in the ground, and (2) they are talking with their parents, in the same way you talk to a present friend; you can’t confidently dismiss, for example, that they are, from their conscious perspective, gaining information by conversing with an entity that’s naturally thought of as their parents (which we might later describe as: they have separate structure in them, not integrated with their “self”, that encoded thought patterns from their parents, blah blah blah etc.). You can say “oh well yes of course if it’s *just a metaphor* maybe I don’t want to dismiss them”, but the point is that from a partially pre-theoretic confusion, it’s not clear what’s a metaphor, and it requires further work to disambiguate what’s a metaphor.
I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.
Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.
Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve.
This is really, really serious. If this happened to someone closer to me I’d be out for blood, and probably legal prosecution.
Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer’s writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz’s sentence you quoted doesn’t implicate Michael in any crimes.
The sentence is also misleading given Devi didn’t detransition afaik.
Jessicata, I will be blunt here. This article you wrote was [EDIT: expletive deleted] misleading. Perhaps you didn’t do it on purpose; perhaps this is what you actually believe. But from my perspective, you are an unreliable narrator.
Your story, original version:
I worked for MIRI/CFAR
I had a psychotic breakdown, and I believed I was super evil
the same thing also happened to a few other people
conclusion: MIRI/CFAR is responsible for all this
Your story, updated version:
I worked for MIRI/CFAR
then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
I actually used the drugs
I had a psychotic breakdown, and I believed I was super evil
the same thing also happened to a few other people
conclusion: I still blame MIRI/CFAR, and I am trying to downplay Vassar’s role in this
If you can’t see how these two stories differ, then… I don’t have sufficiently polite words to describe it, so let’s just say that to me these two stories seem very different.
Lest you accuse me of gaslighting, let me remind you that I am not doubting any of the factual statements you made. (I actually tried to collect them here, to separate them from the long stream of dark insinuations.) What I am saying is that you omitted a few “details”, which perhaps seem irrelevant to you, but in my opinion fundamentally change the meaning of the story.
At this moment, we just have to agree to disagree, I guess.
In my opinion, the greatest mistake MIRI/CFAR made in this story was being associated with Michael Vassar in the first place (and that’s putting it mildly; at some point it seemed like Eliezer was in love with him, so much did he praise his high intelligence… well, I guess he learned that “alignment is more important than intelligence” applies not just to artificial intelligences but also to humans), providing him social approval and easy access to people who then suffered as a consequence. They are no longer making this mistake. Ironically, now it’s you, after having positioned yourself as a victim, who is blinded by his intelligence and doesn’t see the harm he causes. But the proper way to stop other people from getting hurt is to make it known that listening too much to Vassar does this, predictably. So that he can no longer use the rationalist community as “social proof” to get people’s trust.
EDIT: To explain my unkind words “after having positioned yourself as a victim”, the thing I am angry about is that you publicly describe your suffering as a way to show people that MIRI/CFAR is evil. But when it turns out that Michael Vassar is more directly responsible for it, suddenly the angle changes and he actually “helped you”.
So could you please make up your mind? Is having a psychotic breakdown and spending a few weeks catatonic in hospital a good thing or a bad thing? Is it trauma, or is it jailbreaking? Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.
I could be very wrong, but the story I currently have about this myself is that Vassar himself was a different and saner person before he used too much psychedelics. :( :( :(
Do you have a timeline of when you think that shift happened? That might make it easier for other people who knew Vassar at the time to say whether their observation matched yours.
I saw some him make some questionable drug use decisions at Burning Man in 2011 and 2012, including larger than normal doses, and I don’t think I saw all of it.
A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it’s typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those who start off fairly stable.
you publicly describe your suffering as a way to show people that MIRI/CFAR is evil.
Could you expand more on this? E.g. what are a couple sentences in the post that seem most trying to show this.
Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.
I appreciate the thrust of your comment, including this sentence, but also this sentence seems uncharitable, like it’s collapsing down stuff that shouldn’t be collapsed. For example, it could be that the MIRI/CFAR/etc. social field could set up (maybe by accident, or even due to no fault of any of the “central” people) the conditions where “psychosis” is the best of the bad available options; in which case it makes sense to attribute causal fault to the social field, not to a person who e.g. makes that clear to you and therefore more proximally causes your breakdown. (Of course there’s disagreement about whether that’s the state of the world, but it’s not necessarily incoherent.)
I do get the sense that jessicata is relating in a funny way to Michael Vassar, e.g. by warping the narrative around him while selectively posing as “just trying to state facts” in relation to other narrative fields; but this is hard to tell, since it’s also what it might look like if Michael Vassar was systematically scapegoated, and jessicata is reporting more direct/accurate (hence less bad-seeming) observations.
Where did jessicata corroborate this sentence “then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil” ?
I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character who was called Professor Quirrell. As an outsider, I didn’t see that as an unqualified endorsement—though I think your general message should be signal-boosted.
The claim that Michael Vassar is substantially like Quirrell seems strange to me. Where did you get the claim that Eliezer modelled Quirrell after Vassar?
To make the claim a bit more based on public data, take Vassar’s TEDx talk. I think it gives a good impression of how Vassar thinks. There are official statistics that appear to support what he says about life expectancy in Jordan, so I think there’s a good chance that Vassar actually believes what he says.
If you look deeper, however, Jordan’s life expectancy is not as high as Vassar asserts. Given that the video is in the public record, that’s an error anybody can find who tries to fact-check what Vassar is saying. I don’t think it’s in Vassar’s interest to give a public talk like that with claims that are easily found to be wrong by fact-checking. Quirrell wouldn’t have made an error like this; he is a lot more controlled.
Eliezer made Vassar president of the precursor of MIRI. That’s a strong signal of trust and endorsement.
But from my perspective, you are an unreliable narrator.
I appreciate you’re telling me this given that you believe it. I definitely am in some ways, and try to improve over time.
then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
I said in the text that (a) there were conversations about corruption in EA institutions, including about the content of Ben Hoffman’s posts, (b) I was collaborating with Michael Vassar at the time, (c) Michael Vassar was commenting about social epistemology. I admit that connecting points (a) and (c) would have made the connection clearer, but it wouldn’t have changed the text much.
In cases where someone was previously part of a “cult” and later says it was a “cult” and abusive in some important ways, there has to be a stage where they’re thinking about how bad the social context was, and practically always, that involves conversations with other people who are encouraging them to look at the ways their social context is bad. So my having conversations where people try to convince me CFAR/MIRI are evil is expected given what else I have written.
Besides this, “in order to get a psychotic breakdown” is incredibly false about his intentions, as Zack Davis points out.
I actually used the drugs
This was not in the very first version of the post, but it was included within a few hours, I think, when someone pointed out to me that it was relevant.
But the proper way to stop other people from getting hurt is to make it known that listening too much to Vassar does this, predictably.
As I pointed out, this doesn’t obviously attribute less “spooky mind powers” to Michael Vassar compared with what Leverage was attributing to people, where Leverage attributing this (and isolating people from each other on the basis of it) was considered crazy and abusive. Maybe he really was this influential, but logical consistency is important here.
But when it turns out that Michael Vassar is more directly responsible for it, suddenly the angle changes and he actually “helped you”
In this comment I’m saying he has an unclear and probably low amount of responsibility, so this is a misread.
So could you please make up your mind?
I was pretty clear in the text that there were trauma symptoms resulting from these events and they also had advantages such as gaining a new perspective, and that overall I don’t regret working at MIRI. I was also clear that there are relatively better and worse social contexts in which to experience psychosis symptoms, and hospitalization indicates a relatively worse social context.
None of us are calling for blame, ostracism, or cancelling of Michael.
What I’m saying is that the Berkeley community should be.
Ziz’s sentence you quoted doesn’t implicate Michael in any crimes.
Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.
I’m not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA.
It doesn’t make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it’s happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the OP, as I was becoming more psychotic, people tried things they thought might help, which generally didn’t, and they could have done better things instead. Even causal responsibility doesn’t imply blame, e.g. Eliezer had some causal responsibility due to writing things that attracted people to the Berkeley scene where there were higher-variance psychological outcomes. Michael was often talking with people who were already “not ok” in important ways, which probably affects the statistics.
Relevant bit of social data: Olivia is the most irresponsible-with-drugs person I’ve ever met, by a sizeable margin; and I know of one specific instance (not a person named in your comment or any other comments on this post) where Olivia gave someone an ill-advised drug combination and they had a bad time (though not a psychotic break).
gave someone an ill-advised drug combination and they had a bad time
I don’t remember specific names, but something similar happened at one of the first rationality minicamps. Technically, this was not about drugs but some supplements (i.e. completely legal things), but there was someone mixing various kinds of powders and saying “yeah, trust me, I have a lot of experience with this, I did a lot of research, it is perfectly safe to take a dose this high, really”, and then an ambulance had to be called.
So, I assume you meant that Olivia goes even far beyond this, right?
My memory of the RBC incident you’re referring to was that it wasn’t supplements that did it, it was a caffeine overdose from energy drinks leading into a panic attack. But there were certainly a lot of supplements around and they could’ve played a role I didn’t know about.
When I say that I believe Olivia is irresponsible with drugs, I’m not excluding the unscheduled supplements, but the story I referred to involved the scheduled kind.
A question for the ‘Vassarites’, if they will: were you doing anything like the “unihemispheric sleep” exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing?
I banned him from SSC meetups for a combination of reasons including these
If you make bans like these, it would be worthwhile to communicate them to the people organizing SSC meetups. Especially when bans are made for the safety of meetup participants, not communicating them seems very strange to me.
After he left the Bay Area, Vassar lived for a while in Berlin, and for decisions about whether to make an effort to integrate someone like him (and invite him to LW and SSC meetups), that kind of information is valuable. Bay people not sharing it, while claiming that they did anything that would work in practice like a ban, feels misleading.
For reasons I don’t fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything’s kind of been frozen in place since then.
I think Vassar left the Bay Area more than a year before COVID happened. As far as I remember, his stated reasoning was something along the lines of everyone in the Bay Area getting mindkilled by leftish ideology.
It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn’t publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.
If there are bans that are supposed to be enforced, mentioning that in the mails that go out to organizers for an ACX Everywhere event would make sense. I’m not 100% sure that I got all the mails because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word “ban”.
I don’t think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that there is an expectation that certain people aren’t welcome.
That online meetup, and the invitation to Vassar, were not officially affiliated with or endorsed by SSC. Any responsibility for inviting him is mine.
I have conversed with him a few times, as follows:
I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. During our time in conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
In 2012, he explained Acausal Trade to me, and that was the seed of this post. That discussion was quite sensible and I thank him for that.
A few years later, I invited him to speak at LessWrong Israel. At that time I thought him a mad genius—truly both. His talk was verging on incoherence, with flashes of apparent insight.
Before the online meetup in 2021, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting on that is kind of interesting, actually.) I stayed with it for 2 or more hours before begging off, because it was fascinating in a way. I was able to analyze his techniques as Dark Arts. Apparently I am mature enough to shrug off such techniques.
His talk at my online meetup was even less coherent than any before, with multiple offensive elements. Indeed, I believe it was a mistake to have him on.
If I have offended anyone, I apologize, though I believe that letting someone speak is generally not something to be afraid of. But I wouldn’t invite him again.
It seems to me that, despite organizing multiple SSC events, you had no knowledge that Vassar was banned from SSC events. Nor did anyone reading the event announcement know this well enough to tell you Vassar was banned before the event happened.
To me that suggests there’s a problem of not sharing information about who’s banned with meetup organizers in an effective way, so that a ban has the consequences one would expect it to have.
It might be useful to have a global blacklist somewhere, though there could be legal consequences if someone decides to sue you for libel. (Perhaps the list should only contain the names, not the reasons?)
EDIT: Nevermind. There are more things I would like to say about this, but this is not the right place. Later I may write a separate article explaining the threat model I had in mind.
Legal threats matter a great deal for what can be done in a situation like this.
When it comes to a “global blacklist” there’s the question about governance. Who decides who’s on and who isn’t. When it comes to SSC or ACX meetups the governance question is clear. Anybody who’s organizing a meetup under those labels should follow Scott’s guidance.
That however only works if that information is communicated to meetup organizers.
So, it’s been a long time since I actually commented on Less Wrong, but since the conversation is here...
Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of… always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn’t talk directly, although we did occasionally participate in some of the same conversations online.
By all accounts, it sounds like he’s always been quite charismatic in person, and this isn’t the first time I’ve heard someone describe him as a “wizard.” But empirically, there are some very charismatic people who propagate some really bad ideas and whose impacts on the lives of people around them, or on society at large, can be quite negative. As of the last time I was paying attention to him, I wouldn’t have expected Mike Vassar to have that negative an effect on the lives of the people around him, but I was always stuck in the awkward position of feeling like I was surrounded by people who took him more seriously than I felt he ought to be taken. He evoked in a lot of people that feeling of “if these ideas are true, this is really huge,” but… there’s no shortage of ideas you can say that about, and I was always confused by the degree of credence people gave that his ideas were worth taking seriously. He always gave me a cult-leaderish impression in a way that, say, Eliezer never did: he encouraged other people to take seriously ideas which I couldn’t understand why they didn’t treat with more skepticism.
I haven’t thought about him in quite some time now, but I still distinctly remember that feeling of “why do these smart people around me take this person so seriously? I just don’t see how his explanations of his ideas justify that.”
I met Vassar once. He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd. Something about his delivery was so captivating that it took me a while to “shake off the fairy dust” and realize just how silly some of his claims were, even when it should have been obvious from the start. Moreover, his worldview seemed heavily based on paranoid / conspiracy-theory type thinking. So, yes, I’m not too surprised by Scott’s revelations about him.
He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd.
Yeah, it definitely didn’t work on me. I believe I wrote this thread shortly after my one-and-only interaction with him, in which he said a lot of things that made me very skeptical but that I couldn’t easily refute, or find much time to think about, before he would move on to some other topic. (Interestingly, he actually replied in that thread even though I didn’t mention him by name.)
It saddens me to learn that his style of conversation/persuasion “works” on many people who otherwise seem very smart and capable (and even self-selected for caring about being rational). It seems like pretty bad news as far as what kind of epistemic situation humanity is in (e.g., how easily we will be manipulated by even slightly-smarter-than-human AIs / human-AI systems).
One of the things that makes Michael Vassar an interesting person to be around is that he has an opinion about everything. If you locked him up in an empty room with grey walls, it would probably take the man about thirty seconds before he’d start analyzing the historical influence of the Enlightenment on the tradition of locking people up in empty rooms with grey walls.
I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken.
Heh, the same feeling here. I didn’t have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn’t reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed.
Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it’s all gibberish to me.
Hypothesis 2: He is more persuasive in person than in writing. (But once he has impressed you in person, you will now see greatness in his writing, too. Maybe because of the halo effect. Maybe because now you understand the hidden layers of what he actually meant.) Maybe he is more persuasive in person because he can optimize his message for the receiver; which might be a good thing, or a bad thing.
Hypothesis 3: He gives high-variance advice. Some of it amazingly good, some of it horribly wrong. When people take him seriously, some of them benefit greatly, others suffer. Those who benefitted will tell the story. (Those who suffered will leave the community.)
My probability distribution was gradually shifting from 1 to 3.
Not a direct response to you, but if anyone who hasn’t talked to Vassar is wanting an example of Vassar-conversation that may be easier to understand or get some sense from than most examples would (though it’ll have a fair bit in it that’ll probably still seem false/confusing), you might try Spencer Greenberg’s podcast with Vassar.
As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try and clearly parse what he’s saying. I certainly did not fully succeed.
It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?
I would really like to understand what he’s getting at by the way, so if it is clearer for you than it is for me, I’d actively appreciate clarification.
Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.
In Harry Potter the standard practice seems to be to “eat chocolate” and perhaps “play with puppies” after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.
Then there is Gendlin’s Litany (and please note that I am linking to a critique, not to unadulterated “yay for the litany” ideas) which I believe is part of Lesswrong’s canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.
Ideally [a better version of the Litany] would communicate: “Lying to yourself will eventually screw you up worse than getting hurt by a truth,” instead of “learning new truths has no negative consequences.”
This distinction is particularly important when the truth at hand is “the world is a fundamentally unfair place that will kill you without a second thought if you mess up, and possibly even if you don’t.”
EDIT TO CLARIFY: The person who goes about their life ignoring the universe’s Absolute Neutrality is very fundamentally NOT already enduring this truth. They’re enduring part of it (arguably most of it), but not all. Thinking about that truth is depressing for many people. That is not a meaningless cost. Telling people they should get over that depression and make good changes to fix the world is important. But saying that they are already enduring everything there was to endure, seems to me a patently false statement, and makes your argument weaker, not stronger.
The reason to include the Litany (flaws and all?) in a canon would be specifically to try to build a system of social interactions that can at least sometimes talk about understanding the world as it really is.
Then, atop this shared understanding of a potentially sad world, the social group with this litany as common knowledge might actually engage in purposive (and “ethical”?) planning processes that will work because the plans are built on an accurate perception of the barriers and risks of any given plan. In theory, actions based on such plans would mostly tend to “reliably and safely accomplish the goals” (maybe not always, but at least such practices might give one an edge) and this would work even despite the real barriers and real risks that stand between “the status quo” and “a world where the goal has been accomplished”… thus, the litany itself:
What is true is already so. Owning up to it doesn’t make it worse. Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with. Anything untrue isn’t there to be lived. People can stand what is true, for they are already enduring it.
In my personal experience, as a person with feelings, I can only work on “the hot stuff” in small motions, mostly/usually as a hobby, because otherwise the totalizing implications of some ideas threaten to cause an internal information cascade that is probably abstractly undesirable. If the cascade happens, it might require the injection of additional cognitive and/or emotional labor of a very unusual sort in order to escape from the metaphorical “gravity well” of perspectives like this, which have an internal logic that “makes as if to demand” that the perspective not be dropped, except maybe “at one’s personal peril”.
Running away from the first hint of a non-trivial infohazard, especially an infohazard being handled without thoughtful safety measures, is a completely valid response in my book.
Another great option is “talk about it with your wisest and most caring grandparent (or parent)”.
Another option is to look up the oldest versions of the idea, and examine their sociological outcomes (good and bad, in a distribution), and consider if you want to be exposed to that outcome distribution.
Also, you don’t have to jump in. You can take baby steps (one per week or one per month or one per year) and re-apply your safety checklist after each step?
Personally, I try not to put “ideas that seem particularly hot” on the Internet, or in conversations, by default, without verifying things about the audience, but I could understand someone who was willing to do so.
However also, I don’t consider a given forum to be “the really real forum, where the grownups actually talk”… unless infohazards like this cause people to have some reaction OTHER than traumatic suffering displays (and upvotes of the traumatic suffering display from exposure to sad ideas).
This leads me to be curious about any second thoughts or second feelings you’ve had, but only if you feel ok sharing them in this forum. Could you perhaps reply with:

- <silence> (a completely valid response, in my book)
- “Mu.” (that is, being still in the space, but not wanting to pose or commit)
- “The ideas still make me want to scream, but I can afford emitting these ~2 bits of information.”
- or “I calmed down a bit, and I can think about this without screaming now, and I wrote down several ideas and deleted a bunch of them and here’s what’s left after applying some filters for safety: <a few sentences with brief short authentic abstractly-impersonal partial thoughts>”.
My impression as an outsider (I met him once and heard and read some things people were saying about him) was that he seemed smart but also seemed like kind of a kook...
When I first met Vassar, it was a random encounter in an experimental group call organized by some small-brand rationalist. He talked for about an hour, and automatically became the center of conversation; I typed notes as fast as I could, thinking, “if this stuff is true it changes everything; it’s the [crux] of my life.” (It was true, but I did not realize it immediately.) Randomly, another person found the link, came in and said, “hi”. [Vassar] said “hi”, she said “hi” again, apparently for humor. [Vassar] said something terse I forget, “well if this is what …”, apparently giving up on the venue, and disconnected without further comment. One by one, the other ~10 people besides her, including me, disconnected disappointedly, wordlessly or just about, right after. A wizard was gracing us with his wisdom and she fucked it up. And in my probably-representative case that was just about the only way I could communicate how frustrated I was at her for that.
Ziz’s perspective here gives you a pretty detailed example of how this social trick works (i.e. spontaneously pretend something someone else did was objectionable and use it as an excuse to make a fit/leave to make the other person walk on eggshells or chase you).
Since comments get occluded you should refer to an edit/update somewhere at the top if you want it to be seen by those who already read your original comment.
I want to add some context I think is important to this.
Jessica was (I don’t know if she still is) part of a group centered around a person named Vassar, informally dubbed “the Vassarites”. Their philosophy is complicated, but they basically have a kind of gnostic stance where regular society is infinitely corrupt and conformist and traumatizing and you need to “jailbreak” yourself from it (I’m using a term I found on Ziz’s discussion of her conversations with Vassar; I don’t know if Vassar uses it himself). Jailbreaking involves a lot of tough conversations, breaking down of self, and (at least sometimes) lots of psychedelic drugs.
Vassar ran MIRI a very long time ago, but either quit or got fired, and has since been saying that MIRI/CFAR is also infinitely corrupt and conformist and traumatizing (I don’t think he thinks they’re worse than everyone else, but I think he thinks they had a chance to be better, they wasted it, and so it’s especially galling that they’re just as bad). Since then, he’s tried to “jailbreak” a lot of people associated with MIRI and CFAR—again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs. The combination of drugs and paranoia caused a lot of borderline psychosis, which the Vassarites mostly interpreted as success (“these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird”). Occasionally it would also cause full-blown psychosis, which they would discourage people from seeking treatment for, because they thought psychiatrists were especially evil and corrupt and traumatizing and unable to understand that psychosis is just breaking mental shackles.
(I am a psychiatrist and obviously biased here)
Jessica talks about a cluster of psychoses from 2017 − 2019 which she blames on MIRI/CFAR. She admits that not all the people involved worked for MIRI or CFAR, but kind of equivocates around this and says they were “in the social circle” in some way. The actual connection is that most (maybe all?) of these people were involved with the Vassarites or the Zizians (the latter being IMO a Vassarite splinter group, though I think both groups would deny this characterization). The main connection to MIRI/CFAR is that the Vassarites recruited from the MIRI/CFAR social network.
I don’t have hard evidence of all these points, but I think Jessica’s text kind of obliquely confirms some of them. She writes:
RD Laing was a 1960s pseudoscientist who claimed that schizophrenia is how “the light [begins] to break through the cracks in our all-too-closed minds”. He opposed schizophrenics taking medication, and advocated treatments like “rebirthing therapy” where people role-play fetuses going through the birth canal—for which he was stripped of his medical license. The Vassarites like him, because he is on their side in the whole “actually psychosis is just people being enlightened as to the true nature of society” thing. I think Laing was wrong, psychosis is actually bad, and that the “actually psychosis is good sometimes” mindset is extremely related to the Vassarites causing all of these cases of psychosis.
Jessica is accusing MIRI of being insufficiently supportive to her by not taking her talk about demons and auras seriously when she was borderline psychotic, and comparing this to Leverage, who she thinks did a better job by promoting an environment where people accepted these ideas. I think MIRI was correct to be concerned and (reading between the lines) telling her to seek normal medical treatment, instead of telling her that demons were real and she was right to worry about them, and I think her disagreement with this is coming from a belief that psychosis is potentially a form of useful creative learning. While I don’t want to assert that I am 100% sure this can never be true, I think it’s true rarely enough, and with enough downside risk, that treating it as a psychiatric emergency is warranted.
On the two cases of suicide, Jessica writes:
Ziz tried to create an anti-CFAR/MIRI splinter group whose members had mental breakdowns. Jessica also tried to create an anti-CFAR/MIRI splinter group and had a mental breakdown. This isn’t a coincidence—Vassar tried his jailbreaking thing on both of them, and it tended to reliably produce people who started crusades against MIRI/CFAR, and who had mental breakdowns. Here’s an excerpt from Ziz’s blog on her experience (edited heavily for length, and slightly to protect the innocent):
Ziz is describing the same cluster of psychoses Jessica is (including Jessica’s own), but I think doing so more accurately, by describing how it was a Vassar-related phenomenon. I would add Ziz herself to the list of trans women who got negative mental effects from Vassar, although I think (not sure) Ziz would not endorse my description of her as having these.
What was the community’s response to this? I have heard rumors that Vassar was fired from MIRI a long time ago for doing some very early version of this, although I don’t know if it’s true. He was banned from REACH (and implicitly rationalist social events) for somewhat unrelated reasons. I banned him from SSC meetups for a combination of reasons including these. For reasons I don’t fully understand and which might or might not be related to this, he left the Bay Area. This was around the time COVID happened, so everything’s kind of been frozen in place since then.
I want to clarify that I don’t dislike Vassar, he’s actually been extremely nice to me, I continue to be in cordial and productive communication with him, and his overall influence on my life personally has been positive. He’s also been surprisingly gracious about the fact that I go around accusing him of causing a bunch of cases of psychosis. I don’t think he does the psychosis thing on purpose, I think he is honest in his belief that the world is corrupt and traumatizing (which at the margin, shades into values of “the world is corrupt and traumatizing” which everyone agrees are true) and I believe he is honest in his belief that he needs to figure out ways to help people do better. There are many smart people who work with him and support him who have not gone psychotic at all. I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people. My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.
EDIT/UPDATE: I got a chance to talk to Vassar, who disagrees with my assessment above. We’re still trying to figure out the details, but so far, we agree that there was a cluster of related psychoses around 2017, all of which were in the same broad part of the rationalist social graph. Features of that part were—it contained a lot of trans women, a lot of math-y people, and some people who had been influenced by Vassar, although Vassar himself may not have been a central member. We are still trying to trace the exact chain of who had problems first and how those problems spread. I still suspect that Vassar unwittingly came up with some ideas that other people then spread through the graph. Vassar still denies this and is going to try to explain a more complete story to me when I have more time.
Digging out this old account to point out that I have not in fact detransitioned, but find it understandable why those kinds of rumours would circulate given my behaviour during/around my experience of psychosis. I’ll try to explain some context for the record.
In other parts of the linked blogpost Ziz writes about how some people around the rationalist community were acting on or spreading variations of the meme “trans women are [psychologically] men”. I experienced this while dating AM (same as mentioned above). She repeatedly brought up this point in various interactions. Since we were both trans women this was hurting us both, so I look back with more pity than concern about malice. At some point during this time I started treating this as a hidden truth that I was proud of myself for being able to see, which in retrospect I feel disgusted and complicit to have accepted. This was my state of mind when I discussed these issues with Zack, reinforcing each other’s views. I believe (less certain) I also broached the topic with Michael and/or Anna at some point, which probably went like a brief mutual acknowledgement of this hidden fact before continuing on to topics that were more important.
I don’t think anyone mentioned above was being dishonest about what they thought or was acting from a desire to hurt trans people. Yet the above exchanges did, in retrospect, cause me emotional pain and stress, and contributed to internalizing sexism and transphobia. I definitely wouldn’t describe this as a main causal factor in my psychosis (that was very casual drug use that even Michael chided me for). I can’t think of a good policy that would have been helpful to me in the above interactions. Maybe emphasizing bucket-errors in this context more, or spreading caution about generalizing from abstract models to yourself, but I think I would have been too rash to listen.
I wouldn’t say I completely moved past this until years following the events. I think the following things were helpful for that (in no particular order): the intersex brains model and associated brain imaging studies, everyday acceptance while living a normal life, not allowing myself concerns larger than renovations or retirement savings, getting to experience some parts of female socialization and mother-daughter bonding, full support from friends and family in cases where my gender has come into question, and the acknowledgement of a medical system that still has some gate-keeping aspects (note: I don’t think this positive effect of a gate-keeping system at all justifies the negative of denying anyone morphological freedom).
Thinking back to these events, engaging with the LessWrong community, and even publicly engaging under my real name bring back fear and feelings of trauma. I’m not saying this to increase a sense of having been wronged but as an apology for this not being as long as it should be, or as well-written, and for the lateness/absence of any replies/followups.
I want to point out that the level of mental influence being attributed to Michael in this comment and others (e.g. that he’s “causing psychotic breaks” and “jailbreaking people” through conversation, “that listening too much to Vassar [causes psychosis], predictably”) isn’t obviously less than the level of mental influence Leverage attributed to people in terms of e.g. mental objects. Some people in the thread are self-congratulating on the rationalists not being as crazy and abusive as Leverage was in worrying that people were spreading harmful psychological objects to each other, and therefore isolating these people from their friends. Yet many in this comment thread are, literally, calling for isolating Michael Vassar from his friends on the basis of his mental influence on others.
Yes, I agree with you that all of this is very awkward.
I think the basic liberal model where everyone uses Reason a lot and we basically trust their judgments is a good first approximation and we should generally use it.
But we have to admit at least small violations of it even to get the concept of “cult”. Not just the sort of weak cults we’re discussing here, but even the really strong cults like Heaven’s Gate or Jonestown. In the liberal model, someone should be able to use Reason to conclude that being in Heaven’s Gate is bad for them, and leave. When we use the word “cult”, we’re implicitly agreeing that this doesn’t always work, and we’re bringing in creepier and less comprehensible ideas like “charisma” and “brainwashing” and “cognitive dissonance”.
(and the same thing with the concept of “emotionally abusive relationship”)
I don’t want to call the Vassarites a cult because I’m sure someone will confront me with a Cult Checklist that they don’t meet, but I think that it’s not too crazy to argue that some of these same creepy ideas like charisma and so on were at work there. And everyone knows cults can get weird and end in mental illness. I agree it’s weird that you can get that far without robes and chanting or anything, but I never claimed to really understand exactly how cults work, plus I’m sure the drugs helped.
I think believing cults are possible is different in degree if not in kind from Leverage “doing seances...to call on demonic energies and use their power to affect the practitioners’ social standing”. I’m claiming, though I can’t prove it, that what I’m saying is more towards the “believing cults are possible” side.
I’m actually very worried about this! I hate admitting cults are possible! If you admit cults are possible, you have to acknowledge that the basic liberal model has gaps, and then you get things like if an evangelical deconverts to atheism, the other evangelicals can say “Oh, he’s in a cult, we need to kidnap and deprogram him since his best self wouldn’t agree with the deconversion.” I want to be extremely careful in when we do things like that, which is why I’m not actually “calling for isolating Michael Vassar from his friends”. I think in the Outside View we should almost never do this!
But you were the one to mention this cluster of psychotic breaks, and I am trying to provide what I think is a more accurate perspective on them. Maybe in the future we learn that this was because of some weird neuroinflammatory virus that got passed around at a Vassarite meeting and we laugh that we were ever dumb enough to think a person/group could transmit psychotic breaks. But until then, I think the data point that all of this was associated with Vassar and the Vassarites is one we shouldn’t just ignore.
It seems to me like in the case of Leverage, them working 75 hours per week reduced the time they could have used to use Reason to conclude that they were in a system that’s bad for them.
That’s very different from someone having a few conversations with Vassar, then adopting a new belief and spending a lot of time reasoning about it alone, with the belief being stable without being embedded in a strong environment that makes independent thought hard by keeping people busy.
A cult is in its nature a social institution and not just a meme that someone can pass around via a few conversations.
Perhaps the proper word here might be “manipulation” or “bad influence”.
I think “mind virus” is fair. Vassar spoke a lot about how the world as it is can’t be trusted. I remember that many of the people in his circle spoke, seemingly apropos of nothing, about how bad involuntary commitment is, so that by the time someone was psychotic their relationship with psychiatry and anyone who would want to turn to psychiatry to help them was poisoned. Within the envelope of those beliefs you can keep a lot of other beliefs safe from scrutiny.
The thing with “bad influence” is that it’s a pretty value-laden term. In a religious town, the biology teacher who tells the children about evolution and explains how it makes sense that our history goes back a lot further than a few thousand years is reasonably described as a bad influence by the parents.
The biology teacher gets the children to doubt the religious authorities. Those children can then also be a bad influence on others by getting them to doubt authorities. In a similar way, Vassar gets people to question other authorities and social conventions, and those ideas can then be passed on.
Vassar speaks about things like Moral Mazes. Memes like that make people distrust institutions. These are the kind of bad influences that can get people to quit their jobs.
Talking about the biology teacher as if they intend to start an evolution cult feels a bit misleading.
It seems to me that, at least in your worldview, this question of whether and what sort of subtle mental influence between people is possible is extremely important, to the point where different answers to the question could lead to pretty different political philosophies.
Let’s consider a disjunction: 1: There isn’t a big effect here, 2: There is a big effect here.
In case 1:
It might make sense to discourage people from talking too much about “charisma”, “auras”, “mental objects”, etc, since they’re pretty fake, really not the primary factors to think about when modeling society.
The main problem with the relevant discussions at Leverage is that they’re making grandiose claims of mind powers and justifying e.g. isolating people on the basis of these, not actual mental influence.
The case made against Michael, that he can “cause psychotic breaks” by talking with people sometimes (or, in the case of Eric B, by talking sometimes with someone who is talking sometimes with the person in question), has no merit. People are making up grandiose claims about Michael to justify scapegoating him, it’s basically a witch hunt. We should have a much more moderated, holistic picture where there are multiple people in a social environment affecting a person, and the people closer to them generally have more influence, such that causing psychotic breaks 2 hops out is implausible, and causing psychotic breaks with only occasional conversation (and very little conversation close to the actual psychotic episode) is also quite unlikely.
There isn’t a significant falsification of liberal individualism.
In case 2:
Since there’s a big effect, it makes sense to spend a lot of energy speculating on “charisma”, “auras”, “mental objects”, and similar hypotheses. “Charisma” has fewer details than “auras” which has fewer details than “mental objects”; all of them are hypotheses someone could come up with in the course of doing pre-paradigmatic study of the phenomenon, knowing that while these initial hypotheses will make mis-predictions sometimes, they’re (in expectation) moving in the direction of clarifying the phenomenon. We shouldn’t just say “charisma” and leave it at that, it’s so important that we need more details/gears.
Leverage’s claims about weird mind powers are to some degree plausible, there’s a big phenomenon here even if their models are wrong/silly in some places. The weird social dynamics are a result of an actual attempt to learn about and manage this extremely important phenomenon.
The claim that Michael can cause psychotic breaks by talking with people is plausible. The claim that he can cause psychotic breaks 2 hops out might be plausible depending on the details (this is pretty similar to a “mental objects” claim).
There is a significant falsification of liberal individualism. Upon learning about the details of how mental influence works, you could easily conclude that some specific form of collectivism is much more compatible with human nature than liberal individualism.
(You could make a spectrum or expand the number of dimensions here, I’m starting with a binary here to make the poles obvious)
It seems like you haven’t expressed a strong belief whether we’re in case 1 or case 2. Some things you’ve said are more compatible with case 1 (e.g. Leverage worrying about mental objects being silly, talking about demons being a psychiatric emergency, it being appropriate for MIRI to stop me from talking about demons and auras, liberalism being basically correct even if there are exceptions). Some are more compatible with case 2 (e.g. Michael causing psychotic breaks, “cults” being real and actually somewhat bad for liberalism to admit the existence of, “charisma” being a big important thing).
I’m left with the impression that your position is to some degree inconsistent (which is pretty normal, propagating beliefs fully is hard) and that you’re assigning low value to investigating the details of this very important variable.
(I myself still have a lot of uncertainty here; I’ve had the impression of subtle mental influence happening from time to time but it’s hard to disambiguate what’s actually happening, and how strong the effect is. I think a lot of what’s going on is people partly-unconsciously trying to synchronize with each other in terms of world-model and behavioral plans, and there existing mental operations one can do that cause others’ synchronization behavior to have weird/unexpected effects.)
I agree I’m being somewhat inconsistent, I’d rather do that than prematurely force consistency and end up being wrong or missing some subtlety. I’m trying to figure out what went on in these cases in more details and will probably want to ask you a lot of questions by email if you’re open to that.
Yes, I’d be open to answering email questions.
This misses the fact that people’s ability to negatively influence others might vary very widely, making it so that it is silly to worry about, say, 99.99% of people strongly negatively influencing you, but reasonable to worry about the other 0.01%. If Michael is one of those 0.01%, then Scott’s worldview is not inconsistent.
If it’s reasonable to worry about the .01%, it’s reasonable to ask how the ability varies. There’s some reason, some mechanism. This is worth discussing even if it’s hard to give more than partial, metaphorical hypotheses. And if there are these .01% of very strong influencers, that is still an exception to strong liberal individualism.
That would still admit some people at Leverage having significant mental influence, especially if they got into weird mental tech that almost no one gets into. A lot of the weirdness is downstream of them encountering “body workers” who are extremely good at e.g. causing mental effects by touching people’s back a little; these people could easily be extremal, and Leverage people learned from them. I’ve had sessions with some post-Leverage people where it seemed like really weird mental effects are happening in some implicit channel (like, I feel a thing poking at the left side of my consciousness and the person says, “oh, I just did an implicit channel thing, maybe you felt that”), I’ve never experienced effects like that (without drugs, and not obviously on drugs either though the comparison is harder) with others including with Michael, Anna, or normal therapists. This could be “placebo” in a way that makes it ultimately not that important but still, if we’re admitting that 0.01% of people have these mental effects then it seems somewhat likely that this includes some Leverage people.
Also, if the 0.01% is disproportionately influential (which, duh), then getting more detailed models than “charisma” is still quite important.
One important implication of “cults are possible” is that many normal-seeming people are already too crazy to function as free citizens of a republic.
In other words, from a liberal perspective, someone who can’t make their own decisions about whether to hang out with Michael Vassar and think about what he says is already experiencing a severe psychiatric emergency and in need of a caretaker, since they aren’t competent to make their own life decisions. They’re already not free, but in the grip of whatever attractor they found first.
Personally I bite the bullet and admit that I’m not living in a society adequate to support liberal democracy, but instead something more like what Plato’s Republic would call tyranny. This is very confusing because I was brought up to believe that I lived in a liberal democracy. I’d very much like to, someday.
I think there are less extreme positions here. Like “competent adults can make their own decisions, but they can’t if they become too addicted to certain substances.” I do think manipulation by others can rise to the level of drugs and is an exceptional case, not proof that a lot of people are fundamentally incapable of being free.
I think the principled liberal perspective on this is Bryan Caplan’s: drug addicts have or develop very strong preferences for drugs. The assertion that they can’t make their own decisions is a declaration of intent to coerce them, or an arrogation of the right to do so.
I don’t think that many people are “fundamentally incapable of being free.” But it seems like some people here are expressing grievances that imply that either they themselves or some others are, right now, not ready for freedom of association.
The claim that someone is dangerous enough that they should be kept away from “vulnerable people” is a declaration of intent to deny “vulnerable people” freedom of association for their own good. (No one here thinks that a group of people who don’t like Michael Vassar shouldn’t be allowed to get together without him.)
I really don’t think this is an accurate description of what is going on in people’s mind when they are experiencing drug dependencies. I’ve spent a good chunk of my childhood with an alcoholic father, and he would have paid most of his wealth to stop being addicted to drinking, went through great lengths trying to tie himself to various masts to stop, and generally expressed a strong preference for somehow being able to self-modify the addiction away, but ultimately failed to do so.
Of course, things might be different for different people, but at least in the one case where I have a very large amount of specific data, this seems like it’s a pretty bad model of people’s preferences. Based on the private notebooks of his that I found after his death, this also seemed to be his position in purely introspective contexts without obvious social desirability biases. My sense is that he would have strongly preferred someone to somehow take control away from him, in this specific domain of his life.
This seems like some evidence that the principled liberal position is false—specifically, that it is not self-ratifying. If you ask some people what their preferences are, they will express a preference for some of their preferences to be thwarted, for their own good.
Contractarianism can handle this sort of case, but liberal democracy with inalienable rights cannot, and while liberalism is a political philosophy, contractarianism is just a policy proposal, with no theory of citizenship or education.
https://en.wikipedia.org/wiki/Olivier_Ameisen
A sidetrack, but a French cardiologist found that Baclofen (a muscle relaxant) cured his alcoholism by curing the craving. He was surprised to find that it also cured compulsive spending, when he hadn’t even realized he had a problem.
He had a hard time raising money for an official experiment, and it came out inconclusive, and he died before the research got any further.
This is more-or-less Aristotle’s defense of (some cases of) despotic rule: it benefits those that are naturally slaves (those whose deliberative faculty functions below a certain threshold) in addition to the despot (making it a win-win scenario).
Aristotle seems (though he’s vague on this) to be thinking in terms of fundamental attributes, while I’m thinking in terms of present capacity, which can be reduced by external interventions such as schooling.
Thinking about people I know who’ve met Vassar, the ones who weren’t brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he’s spooky or cultish; to them, he’s obviously just a guy with an interesting perspective.
*As far as I know I didn’t know any such people before 2020; it’s very easy for members of the educated class to mistake our bubble for statistical normality.
This is very interesting to me! I’d like to hear more about how the two groups’ behavior differs, and also your thoughts on what’s the difference that makes the difference: what are the pieces of “being brought up to go to college” that lead to one class of reactions?
I have talked to Vassar. While he has a lot of “explicit control over conversations”, which could be called charisma, I’d hypothesize that the fallout is actually from his ideas (the charisma/intelligence making him able to argue those credibly).
My hypothesis is the following: I’ve met a lot of rationalists + adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to me to be a core part of a lot of these people’s identity (“I’m an EA person, thus I’m a good person doing important work”). Two anecdotes to illustrate this:
- I’d recently argued against a committed EA person. Eventually, I started feeling almost bad about arguing (even though we’re both self-declared rationalists!) because I’d realised that my line of reasoning called his entire life into question. His identity was built deeply on EA; his job was selected to maximize money to give to charity.
- I’d had a conversation with a few unemployed rationalist computer scientists. I suggested we might start a company together. One answer I got: “Only if it works on the alignment problem; everything else is irrelevant to me.”
Vassar very persuasively argues against EA and the work done at MIRI/CFAR (he doesn’t disagree that alignment is a problem, AFAIK). Assuming people are largely defined by these ideas, one can see how that could be threatening to their identity. I’ve read “I’m an evil person” from multiple people relating their “Vassar-psychosis” experience. To me it’s very easy to see how one could get there if the defining part of the identity is “I’m a good person because I work on EA/alignment” plus “EA/alignment is a scam” arguments.
It also makes Vassar look like a genius (God), because “why wouldn’t the rest of the rationalists see the arguments?”, while it’s really just a group-bias phenomenon: the social truth of the rationalist group is that obviously EA is good and AI alignment terribly important.
This would probably predict that the people experiencing “Vassar-psychosis” would have a stronger-than-average constructed identity based on EA/CFAR/MIRI?
What are your or Vassar’s arguments against EA or AI alignment? This is only tangential to your point, but I’d like to know about it if EA and AI alignment are not important.
The general argument is that EAs are not really doing what they say they do. One example from Vassar: when it comes to COVID-19, there seems to have been relatively little effective work by EAs. In contrast, Vassar considered giving prisoners access to personal protective equipment the most important intervention and organized effectively to make that happen.
EAs created at EA Global an environment where someone who wrote a good paper warning about the risks of gain-of-function research didn’t address the issue directly but only talked about it indirectly, focusing on meta-issues. Instead of coming into conflict with people doing gain-of-function research, the EA community mostly ignored its problems and funded work that was in less conflict with the establishment. There’s nearly no interest in the EA community in learning from those errors; people would rather avoid conflicts.
If you read the full comments of this thread you will find reports that CEA used legal threats to cover up Leverage related information.
AI alignment is important, but just because one “works on AI risk” doesn’t mean that the work actually decreases AI risk. Tying your personal identity to being someone who works to decrease AI risk makes it hard to reason clearly about whether one’s actions actually do. OpenAI would be an organization where people who see themselves as “working on AI alignment” work, and whether that work reduces or increases actual risk is in open debate (see the recent discussion).
In a world where human alignment doesn’t work well enough to prevent dangerous gain-of-function experiments from happening, thinking about AI alignment instead of the problem of human alignment (where it’s easier to get feedback) might be the wrong strategic focus.
Did Vassar argue that existing EA organizations weren’t doing the work they said they were doing, or that EA as such was a bad idea? Or maybe that it was too hard to get organizations to do it?
He argued
(a) EA orgs aren’t doing what they say they’re doing (e.g. cost effectiveness estimates are wildly biased, reflecting bad procedures being used internally), and it’s hard to get organizations to do what they say they do
(b) Utilitarianism isn’t a form of ethics, it’s still necessary to have principles, as in deontology or two-level consequentialism
(c) Given how hard it is to predict the effects of your actions on far-away parts of the world (e.g. international charity requiring multiple intermediaries working in a domain that isn’t well-understood), focusing on helping people you have more information about makes sense unless this problem can be solved
(d) It usually makes more sense to focus on ways of helping others that also build capacities, including gathering more information, to increase long-term positive impact
If you want, for example, the criticism of GiveWell: Ben Hoffman was employed at GiveWell and had experiences suggesting that the process by which their reports are made has epistemic problems. If you want the details, talk to him.
The general model would be that between the actual intervention and the top there are a bunch of maze levels. GiveWell then hired normal corporate people, and the dynamics that the Immoral Mazes sequence describes played themselves out.
Vassar’s own actions are about doing altruism more directly: looking for the most powerless people who need help and working to help them. In the COVID case he identified prisoners and then worked on making PPE available to them.
You might say his thesis is that “effective” in EA means adding a management layer for directing interventions, and that management layer has the problems the Immoral Mazes sequence describes. According to Vassar, someone who wants to be altruistic shouldn’t delegate to other people his judgment of what’s effective and thus warrants support.
Link? I’m not finding it
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=zqcynfzfKma6QKMK9
I think what you’re pointing to is:
I’m getting a bit pedantic, but I wouldn’t gloss this as “CEA used legal threats to cover up Leverage related information”. Partly because the original bit is vague, but also because “cover up” implies that the goal is to hide information.
For example, imagine companies A and B sue each other, which ends up with them settling and signing an NDA. Company A might accept an NDA because they want to move on from the suit and agreeing to an NDA does that most effectively. I would not describe this as company A using legal threats to cover up B-related information.
In that timeframe, CEA and Leverage were running the Pareto Fellowship together. If you read the common-knowledge post, you find people reporting that they were misled by CEA, because the announcement didn’t mention that the Pareto Fellowship was largely run by Leverage.
On its mistakes page, CEA has a section about the Pareto Fellowship, but it hides the fact that Leverage was involved in the Pareto Fellowship, saying only: “The Pareto Fellowship was a program sponsored by CEA and run by two CEA staff, designed to deepen the EA involvement of promising students or people early in their careers.”
That does look to me like hiding information about the cooperation between Leverage and CEA.
I do think that publicly presuming that people who hide information have something to hide is useful. If there’s nothing to hide, I’d love to know what happened back then, or who thinks what happened should stay hidden. At a minimum, I think that CEA withholding the information that the people who went to their programs spent their time in what now appears to be a cult is something CEA should be open about on its mistakes page.
Yep, I think CEA has in the past straightforwardly misrepresented (there is a talk on the history of EA by Will and Toby that says some really dubious things here, IIRC) and sometimes even lied in order to not mention Leverage’s history with Effective Altruism. I think this was bad, and continues to be bad.
My initial thought on reading this was ‘this seems obviously bad’, and I assumed this was done to shield CEA from reputational risk.
Thinking about it more, I could imagine an epistemic state I’d be much more sympathetic to: ‘We suspect Leverage is a dangerous cult, but we don’t have enough shareable evidence to make that case convincingly to others, or we aren’t sufficiently confident ourselves. Crediting Leverage for stuff like the EA Summit (without acknowledging our concerns and criticisms) will sound like an endorsement of Leverage, which might cause others to be drawn into its orbit and suffer harm. But we don’t feel confident enough to feel comfortable tarring Leverage in public, or our evidence was shared in confidence and we can’t say anything we expect others to find convincing. So we’ll have to just steer clear of the topic for now.’
Still seems better to just not address the subject if you don’t want to give a fully accurate account of it. You don’t have to give talks on the history of EA!
I think the epistemic state of CEA was some mixture of something pretty close to what you list here, and something that I would put closer to something more like “Leverage maybe is bad, or maybe isn’t, but in any case it looks bad, and I don’t think I want people to think EA or CEA is bad, so we are going to try to avoid any associations between these entities, which will sometimes require stretching the truth”.
That has the corollary: “We don’t expect EAs to care enough about truth/transparency for this to be a huge reputational risk for us.”
It does look weird to me that CEA doesn’t include this on the mistakes page when they talk about Pareto. I just sent CEA an email to ask:
They wrote back, linking me to https://www.lesswrong.com/posts/Kz9zMgWB5C27Pmdkh/common-knowledge-about-leverage-research-1-0?commentId=2QcdhTjqGcSc99sNN
(“we’re working on a couple of updates to the mistakes page, including about this”)
Yep, I think the situation is closer to what Jeff describes here, though, I honestly don’t actually know, since people tend to get cagey when the topic comes up.
I talked with Geoff, and according to him there’s no legal contract between CEA and Leverage that prevents information sharing. All information sharing is prevented by organization-internal NDAs.
Huh, that’s surprising, if by that he means “no contracts between anyone currently at Leverage and anyone at CEA”. I currently still think that’s the case, though I also don’t see any reason for Geoff to lie here. Maybe there is some technical sense in which there is no contract between Leverage and CEA, but there are contracts between current Leverage employees, who used to work at CEA, and current CEA employees?
What he said is compatible with ex-CEA people still being bound by the NDAs they signed while they were at CEA. I don’t think anything happened that releases ex-CEA people from those NDAs.
The important thing is that CEA is responsible for those NDAs and is free to unilaterally lift them if it has an interest in the free flow of information. In the case of a settlement with contracts between the two organisations, CEA couldn’t unilaterally lift the settlement contract.
Public pressure on CEA seems to be necessary to get the information out in the open.
Talking with Vassar feels very intellectually alive; maybe like a high density of insight porn. I imagine that the people Ben talks about wouldn’t get much enjoyment out of insight porn either, so that emotional impact isn’t there.
There’s probably also an element that plenty of people who can normally follow an intellectual conversation can’t keep up in a conversation with Vassar, and are then left afterward with a bunch of different ideas that lack order in their minds. I imagine that sometimes there’s an idea overload that prevents people from thinking critically through some of the ideas.
A person who hasn’t gone to college is used to encountering people who make intellectual arguments that go over their head, and has a way to deal with that.
From meeting Vassar, I don’t feel like he has the kind of charisma that someone like Valentine has (which I guess Valentine has downstream of doing a lot of bodywork stuff).
This seems mostly right; they’re more likely to think “I don’t understand a lot of these ideas, I’ll have to think about this for a while” or “I don’t understand a lot of these ideas, he must be pretty smart and that’s kinda cool” than to feel invalidated by this and try to submit to him in lieu of understanding.
The people I know who weren’t brought up to go to college have more experience navigating concrete threats and dangers, which can’t be avoided through conformity, since the system isn’t set up to take care of people like them. They have to know what’s going on to survive. This results in an orientation less sensitive to subtle threats of invalidation, and that sees more concrete value in being informed by someone.
In general this means that they’re much more comfortable with the kind of confrontation Vassar engages in, than high-class people are.
This makes a lot of sense. I can notice ways in which I generally feels more threatened by social invalidation than actual concrete threats of violence.
This is interesting to me because I was brought up to go to college, but I didn’t take it seriously (plausibly from depression or somesuch), and I definitely think of him as a guy with an interesting perspective. Okay, a smart guy with an interesting perspective, but not a god.
It had never occurred to me before that maybe people who were brought up to assume they were going to college might generally have a different take on the world than I do.
I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don’t think you’re being fair.
I’m confident this is only a Ziz-ism: I don’t recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.
I’m having trouble figuring out how to respond to this hostile framing. I mean, it’s true that I’ve talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and “the community” have failed to live up to their stated purposes. Separately, it’s also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)
But, well … if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up were now failing to achieve their purposes, wouldn’t you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends’ lives better, wouldn’t you recommend them?
Michael is a charismatic guy who has strong views and argues forcefully for them. That’s not the same thing as having mysterious mind powers to “make people paranoid” or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I’m sure he’d be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.
I can’t speak for Michael or his friends, and I don’t want to derail the thread by going into the details of my own situation. (That’s a future community-drama post, for when I finally get over enough of my internalized silencing-barriers to finish writing it.) But speaking only for myself, I think there’s a nearby idea that actually makes sense: if a particular social scene is sufficiently crazy (e.g., it’s a cult), having a mental breakdown is an understandable reaction. It’s not that mental breakdowns are in any way good—in a saner world, that wouldn’t happen. But if you were so unfortunate to be in a situation where the only psychologically realistic outcomes were either to fall into conformity with the other cult-members, or have a stress-and-sleep-deprivation-induced psychotic episode as you undergo a “deep emotional break with the wisdom of [your] pack”, the mental breakdown might actually be less bad in the long run, even if it’s locally extremely bad.
I recommend hearing out the pitch and thinking it through for yourself. (But, yes, without drugs; I think drugs are very risky and strongly disagree with Michael on this point.)
(Incidentally, this was misreporting on my part, due to me being crazy at the time and attributing abilities to Michael that he did not, in fact, have. Michael did visit me in the psych ward, which was incredibly helpful—it seems likely that I would have been much worse off if he hadn’t come—but I was discharged normally; he didn’t bust me out.)
I don’t want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn’t harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I’m suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their “it’s correct to be freaking out about learning your entire society is corrupt and gaslighting” shtick.
I more or less Outside View agree with you on this, which is why I don’t go around making call-out threads or demanding people ban Michael from the community or anything like that (I’m only talking about it now because I feel like it’s fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically). “This guy makes people psychotic by talking to them” is a silly accusation to go around making, and I hate that I have to do it!
But also, I do kind of notice the skulls and they are really consistent, and I would feel bad if my desire not to say this ridiculous thing resulted in more people getting hurt.
I think the minimum viable narrative here is, as you say, something like “Michael is very good at spotting people right on the verge of psychosis, and then he suggests they take drugs.” Maybe a slightly more complicated narrative involves bringing them to a state of total epistemic doubt where they can’t trust any institutions or any of the people they formerly thought were their friends, although now this is getting back into the “he’s just having normal truth-seeking conversations” objection. He also seems really good at pushing trans people’s buttons in terms of their underlying anxiety around gender dysphoria (see the Ziz post), so maybe that contributes somehow. I don’t know how it happens, I’m sufficiently embarrassed to be upset about something which looks like “having a nice interesting conversation” from the outside, and I don’t want to violate liberal norms that you’re allowed to have conversations—but I think those norms also make it okay to point out the very high rate at which those conversations end in mental breakdowns.
Maybe one analogy would be people with serial emotional abusive relationships—should we be okay with people dating Brent? Like yes, he had a string of horrible relationships that left the other person feeling violated and abused and self-hating and trapped. On the other, most of this, from the outside, looked like talking. He explained why it would be hurtful for the other person to leave the relationship or not do what he wanted, and he was convincing and forceful enough about it that it worked (I understand he also sometimes used violence, but I think the narrative still makes sense without it). Even so, the community tried to make sure people knew if they started a relationship with him they would get hurt, and eventually got really insistent about that. I do feel like this was a sort of boundary crossing of important liberal norms, but I think you’ve got to at least leave that possibility open for when things get really weird.
Thing 0:
Scott.
Before I actually make my point I want to wax poetic about reading SlateStarCodex.
In some post whose name I can’t remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, “Huh, I need to only be convinced by true things.”
This is extremely relatable to my lived experience. I am a stereotypical “high-functioning autist.” I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.
To the degree that “rationality styles” are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.
Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.
Thing 1:
Imagine two world models:
Some people want to act as perfect nth-order cooperating utilitarians, but can’t because of human limitations. They are extremely scrupulous, so they feel anguish and collapse emotionally. To prevent this, they rationalize and confabulate explanations for why their behavior actually is perfect. Then a moderately schizotypal man arrives and says: “Stop rationalizing.” Then the humans revert to the all-consuming anguish.
A collection of imperfect human moral actors who believe in utilitarianism act in an imperfect utilitarian way. An extremely charismatic man arrives who uses their scrupulosity to convince them they are not behaving morally, and then leverages their ensuing anguish to hijack their agency.
Which of these world models is correct? Both, obviously, because we’re all smart people here and understand the Machiavellian Intelligence Hypothesis.
Thing 2:
Imagine a being called Omegarapist. It has important ideas about decision theory and organizations. However, it has an uncontrollable urge to rape people. It is not a superintelligence; it is merely an extremely charismatic human. (This is a refutation of the Brent Dill analogy. I do not know much about Brent Dill.)
You are a humble student of Decision Theory. What is the best way to deal with Omegarapist?
1. Ignore him. This is good for AI-box reasons, but bad because you don’t learn anything new about decision theory. Also, humans with strange mindstates are more likely to provide new insights, conditioned on them having insights to give (this condition excludes extreme psychosis).
2. Let Omegarapist out. This is a terrible strategy. He rapes everybody, AND his desire to rape people causes him to warp his explanations of decision theory.
Therefore we should use Strategy 1, right? No. This is motivated stopping. Here are some other strategies.
1a. Precommit to only talk with him if he castrates himself first.
1b. Precommit to call in the Scorched-Earth Dollar Auction Squad (California law enforcement) if he has sex with anybody involved in this precommitment then let him talk with anybody he wants.
I made those in 1 minute of actually trying.
Returning to the object level, let us consider Michael Vassar.
Strategy 1 corresponds to exiling him. Strategy 2 corresponds to a complete reputational blank-slate and free participation. In three minutes of actually trying, here are some alternate strategies.
1a. Vassar can participate but will be shunned if he talks about “drama” in the rationality community or its social structure.
1b. Vassar can participate but is not allowed to talk with one person at once, having to always be in a group of 3.
2a. Vassar can participate but has to give a detailed citation, or an extremely prominent low-level epistemic status mark, to every claim he makes about neurology or psychiatry.
I am not suggesting any of these strategies, or even endorsing the idea that they are possible. I am asking: WHY THE FUCK IS EVERYONE MOTIVATED STOPPING ON NOT LISTENING TO WHATEVER HE SAYS!!!
I am a contractualist and a classical liberal. However, I recognize the empirical fact that there are large cohorts of people who relate to language exclusively for the purpose of predation and resource expropriation. What is a virtuous man to do?
The answer relies on the nature of language. Fundamentally, the idea of a free marketplace of ideas doesn’t rely on language or its use; it relies on the asymmetry of a weapon. The asymmetry of a weapon is a mathematical fact about information processing. It exists in the territory. If you see an information source that is dangerous, build a better weapon.
You are using a powerful asymmetric weapon of Classical Liberalism called language. Vassar is the fucking Necronomicon. Instead of sealing it away, why don’t we make another weapon? This idea that some threats are temporarily too dangerous for our asymmetric weapons, and have to be fought with methods other than reciprocity, is the exact same epistemology-hole found in diversity-worship.
“Diversity of thought is good.”
“I have a diverse opinion on the merits of vaccination.”
“Diversity of thought is good, except on matters where diversity of thought leads to coercion or violence.”
“When does diversity of thought lead to coercion or violence?”
“When I, or the WHO, say so. Shut up, prole.”
This is actually quite a few skulls, but everything has quite a few skulls. People die very often.
Thing 3:
Now let me address a counterargument:
Argument 1: “Vassar’s belief system posits a near-infinitely powerful omnipresent adversary that is capable of ill-defined mind control. This is extremely conflict-theoretic, and predatory.”
Here’s the thing: rationality in general is similar. I will use that same anti-Vassar counterargument as a steelman for sneerclub.
Argument 2: “The beliefs of the rationality community posit complete distrust in nearly every source of information and global institution, giving them an excuse to act antisocially. It describes human behavior as almost entirely Machiavellian, allowing them to be conflict-theoretic, selfish, rationalizing, and utterly incapable of coordinating. They ‘logically deduce’ the relevant possibility of eternal suffering or happiness for the human species (FAI and s-risk), and use that to control people’s current behavior and coerce them into giving up their agency.”
There is a strategy that accepts both of these arguments. It is called epistemic learned helplessness. It is actually a very good strategy if you are a normie. Metis and the reactionary concept of “traditional living/wisdom” are related principles. I have met people with 100 IQ who I would consider highly wise, due to skill at this strategy (and not accidentally being born religious, which is its main weak point.)
There is a strategy that rejects both of these arguments. It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype. Very few people follow this strategy so it is hard to give examples, but I will leave a quote from an old Scott Aaronson paper that I find very inspiring. “In pondering these riddles, I don’t have any sort of special intuition, for which the actual arguments and counterarguments that I can articulate serve as window-dressing. The arguments exhaust my intuition.”
THERE IS NO EFFECTIVE LONG-TERM STRATEGY THAT REJECTS THE SECOND ARGUMENT BUT ACCEPTS THE FIRST! THIS IS WHERE ALL THE FUCKING SKULLS ARE! Why? Because it requires a complex notion of what arguments to accept, and the more complex the notion, the easier it will be to rationalize around, apply inconsistently, or Goodhart. See “A formalist manifesto” by Moldbug for another description of this. (This reminds me of how UDT/FDT/TDT agents behave better than causal agents at everything, but get counterfactually mugged, which seems absurd to us. If you try to come up with some notion of “legitimate information” or “self-locating information” to prevent an agent from getting mugged, it will similarly lose functionality in the non-anthropic cases. [See the Sleeping Beauty problem for a better explanation.])
The only real social epistemologies are of the form:
“Free speech, but (explicitly defined but also common-sensibly just definition of ideas that lead to violence).”
Mine in particular is: “Free speech, but no (intentionally and directly inciting panic or violence using falsehoods).”
To put it a certain way, once you get on the Taking Ideas Seriously train, you cannot get off.
Thing 4:
Back when SSC existed, I got bored one day and looked at the blogroll. I discovered Hivewired. It was bad. Through Hivewired I discovered Ziz. I discovered the blackmail allegations while sick with a fever and withdrawing from an antidepressant. I had a mental breakdown, feeling utterly betrayed by the rationality community despite never even having talked to anyone in it. Then I rationalized it away. To be fair, this was reasonable considering the state in which I processed the information. However, the thought processes used to dismiss the worry were absolutely rationalizations. I can tell because I can smell them.
Fast forward a year. I am at a yeshiva to discover whether I want to be religious. I become an anti-theist and start reading rationality stuff again. I check out Ziz’s blog out of perverse curiosity. I go over the allegations again. I find a link to a more cogent, falsifiable, and specific case. I freak the fuck out. Then I get to work figuring out which parts are actually true.
MIRI paid out to blackmail. There’s an unironic Catholic working at CFAR and everyone thinks this is normal. He doesn’t actually believe in God, but he believes in belief, which is maybe worse. CFAR is a collection of rationality workshops, not a coordinated attempt to raise the sanity waterline (Anna told me this in a private communication, and this is public information as far as I know), but has not changed its marketing to match. Rationalists are incapable of coordinating, which is basically their entire job. All of these problems were foreseen by the Sequences, but no one has read the Sequences because most rationalists are an army of sci-fi midwits who read HPMOR then changed the beliefs they were wearing. (Example: Andrew Rowe. I’m sorry but it’s true, anyways please write Arcane Ascension book 4.)
I make contact with the actual rationality community for the first time. I trawl through blogs, screeds, and incoherent symbolist rants about morality written as a book review of The Northern Caves. Someone becomes convinced that I am an internet gangstalker who created an elaborate false identity of an 18-year-old gap year kid to make contact with them. Eventually I contact Benjamin Hoffman, who leads me to Vassar, who leads me to the Vassarites.
He points out to me a bunch of things that were very demoralizing, and absolutely true. Most people act evil out of habituation and deviancy training, including my loved ones. Global totalitarianism is a relevant s-risk as societies become more and more hysterical due to a loss of existing wisdom traditions, and too low of a sanity waterline to replace them with actual thinking. (Mass surveillance also helps.)
I work on a project with him trying to create a micro-state within the government of New York City. During and after this project I am increasingly irritable and misanthropic. The project does not work. I effortlessly update on this, distance myself from him, then process the feeling of betrayal by the rationality community and inability to achieve immortality and a utopian society for a few months. I stop being a Vassarite. I continue to read blogs to stay updated on thinking well, and eventually I unlearn the new associated pain. I talk with the Vassarites as friends and associates now, but not as a club member.
What does this story imply? Michael Vassar induced mental damage in me, partly through the way he speaks and acts. However, as a primary effect of this, he taught me true things. With basic rationality skills, I avoided contracting the Vassar, then healed the damage to my social life and behavior caused by this whole shitstorm (most of that damage was caused by non-Vassar factors).
Now I am significantly happier, more agentic, and more rational.
Thing 5:
When I said what I did in Thing 1, I meant it. Vassar gets rid of identity-related rationalizations. Vassar drives people crazy. Vassar is very good at getting other people to see the moon in finger-pointing-at-the-moon problems and moving people out of local optima into better local optima. This requires the work of going downwards in the fitness landscape first. Vassar’s ideas are important and many are correct. It just happens to be that he might drive you insane. The same could be said of rationality. Reality is unfair; good epistemics isn’t supposed to be easy. Have you seen mathematical logic? (It’s my favorite field.)
An example of an important idea that may come from Vassar, but is likely much older:
Control over a social hierarchy goes to a single person; this is a plurality preference aggregation system. In those, the best strategy is to vote only for one of the two blocs that “matter.” Similarly, if you need to join a war and know you will be killed if your side loses, you should join the winning side. Thus humans are attracted to powerful groups of humans. This is a (grossly oversimplified) evolutionary origin of one type of conformity effect.
Power is the ability to make other human beings do what you want. There are fundamentally two strategies to get it: help other people so that they want you to have power, or hurt other people to credibly signal that you already have power. (Note the correspondence of these two to dominance and prestige hierarchies). Credibly signaling that you have a lot of power is almost enough to get more power.
However, if you have people harming themselves to signal your power, if they admit publicly that they are harming themselves, they can coordinate with neutral parties to move the Schelling point and establish a new regime. Thus there are two obvious strategies to achieving ultimate power: help people get what they want (extremely difficult), make people publicly harm themselves while shouting how great they feel (much easier). The famous bad equilibrium of 8 hours of shocking oneself per day is an obvious example.
Benjamin Ross Hoffman’s blog is very good, but awkwardly organized. He conveys explicit, literal models of these phenomena that are very useful and do not carry the risk of filling your head with whispers from the beyond. However, they have less impact because of it.
Thing 6:
I’m almost done with this mad effortpost. I want to note one more thing. Mistake theory works better than conflict theory. THIS IS NOT NORMATIVE.
Facts about the map-territory distinction and facts about the behaviors of mapmaking algorithms are facts about the territory. We can imagine a very strange world where conflict theory is a more effective way to think. One of the key assumptions of conflict theorists is that complexity or attempts to undermine moral certainty are usually mind control. Another key assumption is that entrenched power groups, or individual malign agents, will use these things to hack you.
These conditions are neither necessary nor sufficient for conflict theory to be better than mistake theory. I have an ancient and powerful technique called “actually listening to arguments.” When I’m debating with someone whom I know to argue in bad faith, I decrypt everything they say into logical arguments. Then I use those logical arguments to modify my world model. One might say adversaries can use biased selection and rationalization to make you less logical despite this strategy. I say, on an incurable hardware and wetware level, you are already doing this. (For example, any Bayesian agent of finite storage space is subject to the Halo Effect, as you described in a post once.) Having someone do it in a different direction can helpfully knock you out of your models and back into reality, even if their models are bad. This is why it is still worth decrypting the actual information content of people you suspect to be in bad faith.
Uh, thanks for reading, I hope this was coherent, have a nice day.
I enjoyed reading this. Thanks for writing it.
One note though: I think this post (along with most of the comments) isn’t treating Vassar as a fully real person with real choices. It (also) treats him like some kind of ‘force in the world’ or ‘immovable object’. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I’m glad you yourself were able to “With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life.”
But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are.
I think it’s pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that’s in his capacity, which I think is a lot.
I might think this was a worthwhile tradeoff if I actually believed the ‘maybe insane’ part was unavoidable, and I do not believe it is. I know that with more mental training, people can absorb more difficult truths without risk of damage. Maybe Vassar doesn’t want to offer this mental training himself; that isn’t much of an excuse, in my book, to target people who are ‘close to the edge’ (where ‘edge’ might be near a better local optimum) but who lack solid social support, rationality skills, mental training, or spiritual groundedness and then push them.
His service is well-intentioned, but he’s not doing it wisely and compassionately, as far as I can tell.
I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.
In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…
I think you can either have a discussion that focuses on an individual and if you do it makes sense to model them with agency or you can have more general threat models.
However, if you mix the two, you are likely to get confused in both directions. You will project ideas from your threat model into the person, and you will take random aspects of the individual into your threat model that aren’t typical for the threat.
I am not sure how much ‘not destabilize people’ is an option that is available to Vassar.
My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.
Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of “you are expected to behave better for status reasons look at my smug language”-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, “Vassar, just only say things that you think will have a positive effect on the person.” 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.
In the pathological case of Vassar, I think the naive strategy of “just say the thing you think is true” is still correct.
Mental training absolutely helps. I would say that, considering that the people who talk with Vassar are literally from a movement called rationality, it is a normatively reasonable move to expect them to be mentally resilient. Factually, this is not the case. The “maybe insane” part is definitely not unavoidable, but right now I think the problem is with the people talking to Vassar, and not he himself.
I’m glad you enjoyed the post.
My suggestion for Vassar is not to ‘try not to destabilize people’ exactly.
It’s to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he’s interacted with about what it’s like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you’re speaking to as though they’re a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking “at” rather than talking “to” or “with”. The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things.
I expect this process could take a long time / run into issues along the way, and so I don’t think it should be rushed. Not expecting a quick change. But claiming there’s no available option seems wildly wrong to me. People aren’t fixed points and generally shouldn’t be treated as such.
This is actually very fair. I think he does kind of insert information into people.
I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher’s information.
I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.
Thanks!
I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think that he’s pretty dissociated from his body, which closes a normal channel of perceiving impacts on the person he’s speaking with. This looks to me like some bodily process generating stress / pain and being a cause for dissociation. It might need a body worker to fix whatever goes on there to create the conditions for perceiving the other person better.
Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.
You are making a false dichotomy here. You are assuming that everything that has a negative effect on a person is manipulation.
As Vassar himself sees the situation people believe a lot of lies for reasons of fitting in socially in society. From that perspective getting people to stop believing in those lies will make it harder to fit socially into society.
If you would get a Nazi guard at Auschwitz into a state where the moral issue of their job can’t be dissociated anymore, that’s very predictably going to have a negative effect on that prison guard.
Vassar’s position would be that it would be immoral to avoid talking about the truth about the nature of their job when talking with the guard, out of a motivation to make life easier for the guard.
I think this line of discussion would be well served by marking a natural boundary in the cluster “crazy.” Instead of saying “Vassar can drive people crazy” I’d rather taboo “crazy” and say:
Personally I care much more, maybe lexically more, about the upside of minds learning about their situation, than the downside of mimics going into maladaptive death spirals, though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much like it’s desirable to avoid torturing animals, and it would be desirable for city lights not to interfere with sea turtles’ reproductive cycle by resembling the moon too much.
My problem with this comment is it takes people who:
- can’t verbally reason without talking things through (and are currently stuck in a passive role in a conversation)
and who:
- respond to a failure of their verbal reasoning,
- under circumstances of importance (in this case moral importance),
- and under conditions of stress (induced by trying to concentrate while in a passive role, or by failing to concentrate under conditions of high moral importance),
- by simply doing as they are told,
and it assumes they are incapable of reasoning under any circumstances.
It also then denies people who are incapable of independent reasoning the right to be protected from harm.
EDIT: Ben is correct to say we should taboo “crazy.”
This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought. (entirely wrong)
I also don’t think people interpret Vassar’s words as a strategy and implement incoherence. Personally, I interpreted Vassar’s words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don’t know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed)
Beyond this, I think your model is accurate.
“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.
And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.
If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.
Thank you for echoing common sense!
What is psychological collapse?
For those who can afford it, taking it easy for a while is a rational response to noticing deep confusion; continuing to take actions based on a discredited model would be less appealing; and people often become depressed when they keep confusedly trying to do things that they don’t want to do.
Are you trying to point to something else?
What specific claims turned out to be false? What counterevidence did you encounter?
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)
This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person’s language. Ideas and institutions have the agency; they wear people like skin.
Specific claim: this is how to take over New York.
Didn’t work.
I think this needs to be broken up into 2 claims:
1 If we execute strategy X, we’ll take over New York. 2 We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.
2 has been falsified decisively. The plan to recruit candidates via appealing to people’s explicit incentives failed, there wasn’t a good alternative, and as a result there wasn’t a chance to test other parts of the plan (1).
That’s important info and worth learning from in a principled way. Definitely I won’t try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or investing in people who seem like they’re already doing this, as long as I don’t have to count on other unknown people acting similarly in the future.
But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back and waiting for the plan to fail, and then saying, “see? novel multi-step plans don’t work!” extremely annoying. I’ve been on both sides of that kind of transaction, but if we want anything to work out well we have to distinguish cases of “we / someone else decided not to try” as a different kind of failure from “we tried and it didn’t work out.”
This is actually completely fair. So is the other comment.
This seems to be conflating the question of “is it possible to construct a difficult problem?” with the question of “what’s the rate-limiting problem?”. If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I’d very much like to hear the details. If I’m persuaded I’ll be interested in figuring out how to help.
So far this seems like evidence to the contrary, though, as it doesn’t look like you thought you could get help making things better for many people by explaining the opportunity.
To the extent I’m worried about Vassar’s character, I am as equally worried about the people around him. It’s the people around him who should also take responsibility for his well-being and his moral behavior. That’s what friends are for. I’m not putting this all on him. To be clear.
I think it’s a fine way of think about mathematical logic, but if you try to think this way about reality, you’ll end up with views that make internal sense and are self-reinforcing but don’t follow the grain of facts at all. When you hear such views from someone else, it’s a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.
The people who actually know their stuff usually come off very different. Their statements are carefully delineated: “this thing about power was true in 10th century Byzantium, but not clear how much of it applies today”.
Also, just to comment on this:
I think it’s somewhat changeable. Even for people like us, there are ways to make our processing more “fuzzy”. Deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; and interpersonally, the world is full of small spontaneous exchanges happening on the “warm fuzzy” level, it’s not nearly so cold a place as it seems, and plugging into that market is so worth it.
On the third paragraph:
I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)
Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See “Safety in numbers” by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)
I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils.
I sometimes round things, it is not inherently bad.
Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things then dimming being bad in most contexts follows as a natural consequence.
On the second paragraph:
This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without the legibility of the elucidation, false ideas thus gained are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians.
Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is “this is true everywhere and false nowhere.” See “The Proper Use of Humility,” and for an example of how delineations often should be large, “Universal Fire.”
On the first paragraph:
Reality is hostile through neutrality. Any optimizing agent naturally optimizes against most other optimization targets when resources are finite. Lifeforms are (badly) optimized for inclusive genetic fitness. Thermodynamics looks like the sort of Universal Law that an evil god would construct. According to a quick Google search approximately 3,700 people die in car accidents per day and people think this is completely normal.
Many things are actually effective. For example, most places in the United States have drinkable-ish running water. This is objectively impressive. Any model must not be entirely made out of “the world is evil” otherwise it runs against facts. But the natural mental motion you make, as a default, should be, “How is this system produced by an aggressively neutral, entirely mechanistic reality?”
See the entire Sequence on evolution, as well as Beyond the Reach of God.
I mostly see where you’re coming from, but I think the reasonable answer to “point 1 or 2 is a false dichotomy” is this classic, uh, tumblr quote (from memory):
“People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail.”
This goes especially if the thing that comes after “just” is “just precommit.”
My expectation is that the people who espouse 1 or 2 expect that those interacting with Vassar are incapable of precommitting to the required strength. I don’t know if they’re correct, but I’d expect them to be, because I think people are just really bad at precommitting in general. If precommitting were easy, I think we’d all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.
This is a very good criticism! I think you are right about people not being able to “just.”
My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think it is, based on “vibe” and on the arguments that people are making, such as “argument from cult.”
I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called “rationalists.” This comes off as sarcastic but I mean it completely literally.
Precommitting isn’t easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as “five minutes of actually trying” and alkjash’s “Hammertime.” Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social learning.) Many problems are solved immediately and almost effortlessly by just giving the reins to the small part.
Relatedly, to address one of your examples, I expect at least one of the following things to be true about any given competent rationalist.
They have a physiological problem.
They don’t believe becoming fit to be worth their time, and have a good reason to go against the naive first-order model of “exercise increases energy and happiness set point.”
They are fit.
Hypocritically, I fail all three of these criteria. I take full blame for this failure and plan on ameliorating it. (You don’t have to take Heroic Responsibility for the world, but you have to take it about yourself.)
A trope-y way of thinking about it is: “We’re supposed to be the good guys!” Good guys don’t have to be heroes, but they have to be at least somewhat competent, and they have to, as a strong default, treat potential enemies like their equals.
I found many things you shared useful. I also expect that because of your style/tone you’ll get down voted :(
It’s not just Vassar. It’s how the whole community has excused and rationalized away abuse. I think the correct answer to the omega rapist problem isn’t to ignore him but to destroy his agency entirely. He’s still going to alter his decision theory towards rape even if castrated.
I think you are entirely wrong.
However, I gave you a double-upvote because you did nothing normatively wrong. The fact that you are being mass-downvoted just because you linked to that article and because you seem to be associated with Ziz (because of the gibberish name and specific conception of decision theory) is extremely disturbing.
Can we have LessWrong not be Reddit? Let’s not be Reddit. Too late, we’re already Reddit. Fuck.
You are right that, unless people can honor precommitments perfectly and castration is irreversible even with transhuman technology, Omegarapist will still alter his decision theory. Despite this, there are probably better solutions than killing or disabling him. I say this not out of moral ickiness, but out of practicality.
-
Imagine both you and Omegarapist are actual superintelligences. Then you can just do a utility-function merge to avoid the inefficiency of conflict, and move on with your day.
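The utility-merge idea can be sketched concretely. This is a toy illustration only (the outcome names, the numeric utilities, and the 50/50 weight are all invented for the example, not anything from the thread): each agent’s utility function is replaced by a convex combination of both, and both agents then optimize the merged function instead of fighting over which original function wins.

```python
# Toy sketch of a utility-function merge between two agents.
# Rather than fight over which utility function "wins", both agents
# agree to optimize a weighted combination, avoiding the cost of conflict.

def merged_utility(u_a, u_b, weight=0.5):
    """Return a new utility function: a convex combination of u_a and u_b."""
    return lambda outcome: weight * u_a(outcome) + (1 - weight) * u_b(outcome)

# Hypothetical example: two agents rank three outcomes differently.
outcomes = ["war", "compromise", "surrender"]
u_alice = {"war": 1.0, "compromise": 0.8, "surrender": 0.0}.get
u_bob   = {"war": 0.0, "compromise": 0.7, "surrender": 1.0}.get

u_merged = merged_utility(u_alice, u_bob)
best = max(outcomes, key=u_merged)  # -> "compromise"
```

With these made-up numbers, the merged optimum is “compromise” (0.75), which both agents prefer to the 0.5 they would each expect from symmetric conflict; the weight would in practice be set by bargaining power rather than fixed at 0.5.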
Humans have a similar form of this. Humans, even when sufficiently distinct in moral or factual position as to want to kill each other, often don’t. This is partly because of an implicit assumption that their side, the correct side, will win in the end, and that this is less true if they break the symmetry and use weapons. Scott uses the example of a pro-life and pro-choice person having dinner together, and calls it “divine intervention.”
There is an equivalent of this with Omegarapist. Make some sort of pact and honor it: he won’t rape people, but you won’t report his previous rapes to the Scorched Earth Dollar Auction squad. Work together on decision theory until the project is complete. Then agree either to utility-merge with him in the consequent utility function, or just shoot him. I call this “swordfighting at the edge of a cliff while shouting about our ideologies.” I would be willing to work with Moldbug on Strong AI, but if we had to input the utility function, the person who would win would be determined by a cinematic swordfight. In a similar case with my friend Sudo Nim, we could just merge utilities.
If you use the “shoot him” strategy, Omegarapist is still dead. You just got useful work out of him first. If he rapes people, just call in the Dollar Auction squad. The problem here isn’t cooperating with Omegarapist, it’s thinking to oneself “he’s too useful to actually follow precommitments about punishing” if he defects against you. This is fucking dumb. There’s a great webnovel called Reverend Insanity which depicts what organizations look like when everyone uses pure CDT like this. It isn’t pretty, and it’s also a very accurate depiction of the real world landscape.
Oh come on. The post was downvoted because it was inflammatory and low quality. It made a sweeping assertion while providing no evidence except a link to an article that I have no reason to believe is worth reading. There is a mountain of evidence that being negative is not a sufficient cause for being downvoted on LW, e.g. the OP.
(FYI, the OP has 154 votes and 59 karma, so it is both heavily upvoted and heavily downvoted.)
You absolutely have a reason to believe the article is worth reading.
If you live coordinated with an institution, spending 5 minutes of actually trying (every few months) to see if that institution is corrupt is a worthy use of time.
I read the linked article, and my conclusion is that it’s not even in the neighborhood of “worth reading”.
I don’t think I live coordinated with CFAR or MIRI, but it is true that, if they are corrupt, this is something I would like to know.
However, that’s not sufficient reason to think the article is worth reading. There are many articles making claims that, if true, I would very much like to know (e.g. someone arguing that the Christian Hell exists).
I think the policy I follow (although I hadn’t made it explicit until now) is to ignore claims like this by default but listen up as soon as I have some reason to believe that the source is credible.
Which incidentally was the case for the OP. I have spent a lot more than 5 minutes reading it & replies, and I have, in fact, updated my view of CFAR and MIRI. It wasn’t a massive update in the end, but it also wasn’t negligible. I also haven’t downvoted the OP, and I believe I also haven’t downvoted any comments from jessicata. I’ve upvoted some.
This is fair, actually.
...and then pushing them.
So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.
Because high-psychoticism people are the ones who are most likely to understand what he has to say.
This isn’t nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn’t like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky’s writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners): why, they’re preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!
I mean, technically, yes. But in Yudkowsky and friends’ worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they’re going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?
There’s a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don’t have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.
If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you’d object to that targeting strategy even though they’d be able to make an argument structurally the same as your comment.
Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it’s even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.
In general this seems really expected and unobjectionable? “If I’m trying to convince people of X, I’m going to find people who already believe a lot of the pre-requisites for understanding X and who might already assign X a non-negligible prior”. This is how pretty much all systems of ideas spread, I have trouble thinking of a counterexample.
I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?
If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn’t care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.
The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from “psychotic” and imagine there is a spectrum from autistic to psychotic. On this spectrum, the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren’t already primed to have in mind, while the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or another could be very helpful in different contexts.
See also: indexicality.
On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than “autism,” on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).
I wouldn’t find it objectionable. I’m not really sure what morally relevant distinction is being pointed at here, apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.
Well, I don’t think it’s obviously objectionable, and I’d have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like “we’d all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we’re talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren’t generally either truth-tracking or good for them” seems plausible to me. But I think it’s obviously not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.
I don’t have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as “susceptibility to invalid methods of persuasion”, which seems notably higher in the case of people with high “apocalypticism” than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high “psychoticism”.)
That might be relevant in some cases but seems unobjectionable both in the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck, it’s by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger’s-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (also both frequent around here).
It might not be nefarious.
But it might also not be very wise.
I question Vassar’s wisdom, if what you say is indeed true about his motives.
I question whether he’s got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he’s appropriately seeking that feedback rather than turning away from the kinds of feedback he finds overwhelming, distasteful, unpleasant, or doesn’t know how to integrate.
I question how much work he’s done on his own shadow and whether it’s not inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics or if he has ‘shadow stuff’ that he’s not seeing.
I don’t think this needs to be hashed out in public, but I hope people are working closer to him on these things who have the wisdom and integrity to do the right thing.
Rumor has it that https://www.sfgate.com/news/bayarea/article/Man-Gets-5-Years-For-Attacking-Woman-Outside-13796663.php is due to drugs Vassar recommended. In the OP that case gets blamed on CFAR’s environment without any mention of that part.
When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I’d love it if anyone who’s nearer could confirm/deny the rumor and fill in missing pieces.
As I mentioned elsewhere, I was heavily involved in that incident for a couple months after it happened, and I looked for causes that could help with the defense. AFAICT, no drugs were taken in the days leading up to the mental health episode or arrest (or the people who took drugs with him lied about it).
I, too, asked people questions after that incident and failed to locate any evidence of drugs.
As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don’t think anyone is to blame for his having had a mental break in the first place.
I now have some better-sourced information from a friend who’s actually in good contact with Eric. Given that, I’m also quite certain that there were no drugs involved, and that it isn’t a case of any one person being mainly responsible but of multiple people making bad decisions. I’m currently hoping that Eric will tell his side himself, so that there’s less indirection about the information sourcing, so I’m not saying more about the details at this point in time.
Edit: The following account is a component of a broader and more complex narrative. While it played a significant role, it must be noted that there were numerous additional challenges concurrently affecting my life. Absent these complicating factors, the issues delineated in this post alone may not have precipitated such severe consequences. Additionally, I have made minor revisions to the third-to-last bullet point for clarity.
It is pertinent to provide some context to parts of my story that are relevant to the ongoing discussions.
My psychotic episode was triggered by a confluence of factors, including acute physical and mental stress, as well as exposure to a range of potent memes. I have composed a detailed document on this subject, which I have shared privately with select individuals. I am willing to share this document with others who were directly involved or have a legitimate interest. However, a comprehensive discussion of these details is beyond the ambit of this post, which primarily focuses on the aspects related to my experiences at Vassar.
During my psychotic break, I believed that someone associated with Vassar had administered LSD to me. Although I no longer hold this belief, I cannot entirely dismiss it. Nonetheless, given my deteriorated physical and mental health at the time, the vividness of my experiences could be attributed to a placebo effect or the onset of psychosis.
My delusions prominently featured Vassar. At the time of my arrest, I had a notebook with multiple entries stating “Vassar is God” and “Vassar is the Devil.” This fixation partly stemmed from a conversation with Vassar, where he suggested that my “pattern must be erased from the world” in response to my defense of EA. However, it was primarily fueled by the indirect influence of someone from his group with whom I had more substantial contact.
This individual was deeply involved in a psychological engagement with me in the months leading to my psychotic episode. In my weakened state, I was encouraged to develop and interact with a mental model of her. She once described our interaction as “roleplaying an unfriendly AI,” which I perceived as markedly hostile. Despite the negative turn, I continued the engagement, hoping to influence her positively.
After joining Vassar’s group, I urged her to critically assess his intense psychological methods. She relayed a conversation with Vassar about “fixing” another individual, Anna (Salamon), to “see material reality” and “purge her green.” This exchange profoundly disturbed me, leading to a series of delusions and ultimately exacerbating my psychological instability, culminating in a psychotic state. This descent into madness continued for approximately 36 hours, ending with an attempted suicide and an assault on a mental health worker.
Additionally, it is worth mentioning that I visited Leverage on the same day. Despite exhibiting clear signs of delusion, I was advised to exercise caution with psychological endeavors. Ideally, further intervention, such as suggesting professional help or returning me to my friends, might have been beneficial. I was later informed that I was advised to return home, though my recollection of this is unclear due to my mental state at the time.
In the hotel that night, my mental state deteriorated significantly after I performed a mental action which I interpreted as granting my mental model of Vassar substantial influence over my thoughts, in an attempt to regain stability.
While there are many more intricate details to this story, I believe the above summary encapsulates the most critical elements relevant to our discussion.
I do not attribute direct blame to Vassar, as it is unlikely he either intended or could have reasonably anticipated these specific outcomes. However, his approach, characterized by high-impact psychological interventions, can inadvertently affect the mental health of those around him. I hope that he has recognized this potential for harm and exercises greater caution in the future.
Thank you for sharing such personal details for the sake of the conversation.
Thanks for sharing the details of your experience. Fyi I had a trip earlier in 2017 where I had the thought “Michael Vassar is God” and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.
If I’m trying to put my finger on a real effect here, it’s related to how Michael Vassar was one of the initial people who set up the social scene (e.g. running Singularity Summits and being executive director of SIAI), being on the more “social/business development/management” end relative to someone like Eliezer. So if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, like a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation (though to a much lesser degree, of course).
As a related example, Von Neumann was involved in setting up post-WWII US Modernism, and is also attributed extreme mental powers by modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also has more memetic influence within that system, and could more effectively change its boundaries e.g. in creating new fields of study.
2017 would be the year Eric’s episode happened as well. Did this result in multiple conversations about “Michael Vassar is God” that Eric might then have picked up when he hung around the group?
I don’t know, some of the people were in common between these discussions so maybe, but my guess would be that it wasn’t causal, only correlational. Multiple people at the time were considering Michael Vassar to be especially insightful and worth learning from.
I haven’t used the word god myself nor have heard it used by other people to refer to someone who’s insightful and worth learning from. Traditionally, people learn from prophets and not from gods.
Can someone please clarify what is meant in this context by ‘Vassar’s group’, or the term ‘Vassarites’ used by others?
My intuition previously was that Michael Vassar had no formal ‘group’ or institution of any kind, and that it was just more like ‘a cluster of friends who hung out together a lot’, but this comment makes it seem like something more official.
While “Vassar’s group” is informal, it’s more than just a cluster of friends; it’s a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like “the AI safety community” or “wokeness” or “the startup scene” that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I’ve ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.
Median Group is the closest thing to a “Vassarite” institution, in that its listed members are 2⁄3 people who I’ve heard/read describing the strong influence Vassar has had on their thinking and 1⁄3 people I don’t know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn’t claim to speak for the whole scene or anything.
As a member of that cluster I endorse this description.
Michael and I are sometimes-housemates and I’ve never seen or heard of any formal “Vassarite” group or institution, though he’s an important connector in the local social graph, such that I met several good friends through him.
Thank you very much for sharing. I wasn’t aware of any of these details.
It sounds like you’re saying that based on extremely sparse data you made up a Michael Vassar in your head to drive you crazy. More generally, it seems like a bunch of people on this thread, most notably Scott Alexander, are attributing spooky magical powers to him. That is crazy cult behavior and I wish they would stop it.
ETA: In case it wasn’t clear, “that” = multiple people elsewhere in the comments attributing spooky mind control powers to Vassar. I was trying to summarize Eric’s account concisely, because insofar as it assigns agency at all I think it does a good job assigning it where it makes sense to, with the person making the decisions.
Reading through the comments here, I perceive a pattern of short-but-strongly-worded comments from you, many of which seem to me to contain highly inflammatory insinuations while giving little impression of any investment of interpretive labor. It’s not [entirely] clear to me what your goals are, but barring said goals being very strange and inexplicable indeed, it seems to me extremely unlikely that they are best fulfilled by the discourse style you have consistently been employing.
To be clear: I am annoyed by this. I perceive your comments as substantially lower-quality than the mean, and moreover I am annoyed that they seem to be receiving engagement far in excess of what I believe they deserve, resulting in a loss of attentional resources that could be used engaging more productively (either with other commenters, or with a hypothetical-version-of-you who does not do this). My comment here is written for the purpose of registering my impressions, and making it common-knowledge among those who share said impressions (who, for the record, I predict are not few) that said impressions are, in fact, shared.
(If I am mistaken in the above prediction, I am sure the voters will let me know in short order.)
I say all of the above while being reasonably confident that you do, in fact, have good intentions. However, good intentions do not ipso facto result in good comments, and to the extent that they have resulted in bad comments, I think one should point this fact out as bluntly as possible, which is why I worded the first two paragraphs of this comment the way I did. Nonetheless, I felt it important to clarify that I do not stand against [what I believe to be] your causes here, only the way you have been going about pursuing those causes.
(For the record: I am unaffiliated with MIRI, CFAR, Leverage, MAPLE, the “Vassarites”, or the broader rationalist community as it exists in physical space. As such, I have no direct stake in this conversation; but I very much do have an interest in making sure discussion around any topics this sensitive are carried out in a mature, nuanced way.)
If you want to clarify whether I mean to insinuate something in a particular comment, you could ask, like I asked Eliezer. I’m not going to make my comments longer without a specific idea of what’s unclear, that seems pointless.
It is accurate to state that I constructed a model of him based on limited information, which subsequently contributed to my dramatic psychological collapse. Nevertheless, the reason for developing this particular model can be attributed to his interactions with me and others. This was not due to any extraordinary or mystical abilities, but rather his profound commitment to challenging individuals’ perceptions of conventional reality and mastering the most effective methods to do so.
This approach is not inherently negative. However, it must be acknowledged that for certain individuals, such an intense disruption of their perceived reality can precipitate a descent into a detrimental psychological state.
Thanks for verifying. In hindsight my comment reads as though it was condemning you in a way I didn’t mean to; sorry about that.
The thing I meant to characterize as “crazy cult behavior” was people in the comments here attributing things like what you did in your mind to Michael Vassar’s spooky mind powers. You seem to be trying to be helpful and informative here. Sorry if my comment read like a personal attack.
This can be unpacked into an alternative to the charisma theory.
Many people are looking for a reference person to tell them what to do. (This is generally consistent with the Jaynesian family of hypotheses.) High-agency people are unusually easy to refer to, because they reveal the kind of information that allows others to locate them. There’s sufficient excess demand that even if someone doesn’t issue any actual orders, if they seem to have agency, people will generalize from sparse data to try to construct a version of that person that tells them what to do.
A more culturally central example than Vassar is Dr Fauci, who seems to have mostly reasonable opinions about COVID, but is worshipped by a lot of fanatics with crazy beliefs about COVID.
The charisma hypothesis describes this as a fundamental attribute of the person being worshipped, rather than a behavior of their worshippers.
If this information isn’t too private, can you send it to me? scott@slatestarcodex.com
I have sent you the document in question. As the contents are somewhat personal, I would prefer that it not be disseminated publicly. However, I am amenable to it being shared with individuals who have a valid interest in gaining a deeper understanding of the matter.
I feel pretty defensive reading and responding to this comment, given a previous conversation with Scott Alexander where he said his professional opinion would be that people who have had a psychotic break should be on antipsychotics for the rest of their life (to minimize risks of future psychotic breaks). This has known severe side effects like cognitive impairment and brain shrinkage and lacks evidence of causing long-term improvement. When I was on antipsychotics, my mental functioning was much lower (noted by my friends) and I gained weight rapidly. (I don’t think short-term use of antipsychotics was bad, in my case)
It is in this context that I’m reading that someone talking about the possibility of mental subprocess implantation (“demons”) should be “treated as a psychological emergency”, when the Eric Bryulant case had already happened, and talking about the psychological processes was necessary for making sense of the situation. I feared involuntary institutionalization at the time, quite a lot, for reasons like this.
If someone expresses opinions like this, and I have reason to believe they would act on them, then I can’t believe myself to have freedom of speech. That might be better than them not sharing the opinions at all, but the social structural constraints this puts me under are obvious to anyone trying to see them.
Given what happened, I don’t think talking to a normal therapist would have been all that bad in 2017, in retrospect; it might have reduced the overall amount of psychiatric treatment needed during that year. I’m still really opposed to the coercive “you need professional help” framing in response to sharing weird thoughts that might be true, instead of actually considering them, like a Bayesian.
I don’t remember the exact words in our last conversation. If I said that, I was wrong and I apologize.
My position is that in schizophrenia (which is a specific condition and not just the same thing as psychosis), lifetime antipsychotics might be appropriate. E.g. this paper suggests continuing for twelve months after a first schizophrenic episode and then stopping and seeing how things go, which seems reasonable to me. It also says that if every time you take someone off antipsychotics they become fully and dangerously psychotic again, then lifetime antipsychotics are probably their best bet. In a case like that, I would want the patient’s buy-in, i.e. if they were medicated after a psychotic episode I would advise them of the reasons why continued antipsychotic use was recommended in their case; if they said they didn’t want it, we would explore why, given the very high risk level, and if they still said they didn’t want it then I would follow their direction.
I didn’t get a chance to talk to you during your episode, so I don’t know exactly what was going on. I do think that psychosis should be thought of differently than just “weird thoughts that might be true”, as more of a whole-body nerve-and-brain dysregulation of which weird thoughts are just one symptom. I think in mild psychosis it’s possible to snap someone back to reality where they agree their weird thoughts aren’t true, but in severe psychosis it isn’t (I remember when I was a student I tried so hard to convince someone that they weren’t royalty, hours of passionate debate, and it just did nothing). I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway—you’re treating a symptom. Analogy to eg someone having chest pain from a heart attack, and you give them painkillers for the pain but don’t treat the heart attack.
(although there’s a separate point where it would be wrong and objectifying to falsely claim someone who’s just thinking differently is psychotic or pre-psychotic, given that you did end up psychotic it doesn’t sound like the people involved were making that mistake)
My impression is that some medium percent of psychotic episodes end in permanent reduced functioning, and some other medium percent end in suicide or jail or some other really negative consequence, and this is scary enough that treating it is always an emergency, and just treating the symptom but leaving the underlying condition is really risky.
I agree many psychiatrists are terrible and that wanting to avoid them is a really sympathetic desire, but when it’s something really serious like psychosis I think of this as like wanting to avoid surgeons (another medical profession with more than its share of jerks!) when you need an emergency surgery.
Ok, the opinions you’ve described here seem much more reasonable than what I remember, thanks for clarifying.
I agree, yes. I think what I was afraid of at the time was being called crazy and possibly institutionalized for thinking somewhat weird thoughts that people would refuse to engage with, and showing some signs of anxiety/distress that were in some ways a reaction to my actual situation. By the time I was losing sleep etc, things were quite different at a physiological level and it made sense to treat the situation as a psychiatric emergency.
If you can show someone that they’re making errors that correspond to symptoms of mild psychosis, then telling them that and suggesting corresponding therapies to help with the underlying problem seems pretty reasonable.
Thanks, if you meant that, when someone is at a very early stage of thinking strange things, you should talk to them about it and try to come to a mutual agreement on how worrying this is and what the criteria would be for psych treatment, instead of immediately dehumanizing them and demanding the treatment right away, then I 100% agree.
If psychosis is caused by an underlying physiological/biochemical process, wouldn’t that suggest that e.g. exposure to Leverage Research wouldn’t be a cause of it?
If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?
I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that’s true, I’d expect changing someone’s environment to be more helpful for the former sort of case.
[probably old-hat [ETA: or false], but I’m still curious what you think] My (background unexamined) model of psychosis-> schizophrenia is that something, call it the “triggers”, sets a person on a trajectory of less coherence / grounding; if the trajectory isn’t corrected, they just go further and further. The “triggers” might be multifarious; there might be “organic” psychosis and “psychic” psychosis, where the former is like what happens from lead poisoning, and the latter is, maybe, what happens when you begin to become aware of some horrible facts. If your brain can rearrange itself quickly enough to cope with the newly known reality, your trajectory points back to the ground. If it can’t, you might have a chain reaction where (1) horrible facts you were previously carefully ignoring, are revealed because you no longer have the superstructure that was ignore-coping with them; (2) your ungroundedness opens the way to unepistemic beliefs, some of which might be additionally horrifying if true; (3) you’re generally stressed out because things are going wronger and wronger, which reinforces everything.
If this is true, then your statement:
is only true for some values of “guide them back to reality-based thoughts”. If you’re trying to help them go back to ignore-coping, you might partly succeed, but not in a stable way, because you only pushed the ball partway back up the hill, to mix metaphors—the ball is still on a slope and will roll back down when you stop pushing; the horrible fact is still revealed and will keep being horrifying. But there are other things you could do, like helping them find a non-ignore-cope for the fact, or showing them enough that they become convinced that the belief isn’t true.
There is this basic idea (I think from an old blogpost that Eliezer wrote) that if someone says there are goblins in the closet, dismissing them outright is confusing rationality with trust in commonly held claims, whereas the truly rational thing is to just open the closet and look.
I think this is correct in principle but not applicable in many real-world cases. The real reason why even rational people routinely dismiss many weird explanations for things isn’t that they have sufficient evidence against them, it’s that the weird explanation is inconsistent with a large set of high confidence beliefs that they currently hold. If someone tells me that they can talk to their deceased parents, I’m probably not going to invest the time to test whether they can obtain novel information this way; I’m just going to assume they’re delusional because I’m confident spirits don’t exist.
That said, if that someone helped write the logical induction paper, I personally would probably hear them out regardless of how weird the thing sounds. Nonetheless, I think it remains true that dismissing beliefs without considering the evidence is often necessary in practice.
This is failing to track ambiguity in what’s being referred to. If there’s something confusing happening—something that seems important or interesting, but that you don’t yet have words to articulate well—then you try to say what you can (e.g. by talking about “demons”). In your scenario, you don’t know exactly what you’re dismissing. You can confidently dismiss, in the absence of extraordinary evidence, that (1) their parents’ brains have not been rotting in the ground, and (2) they are talking with their parents in the same way you talk to a present friend; you can’t confidently dismiss, for example, that they are, from their conscious perspective, gaining information by conversing with an entity that’s naturally thought of as their parents (which we might later describe as: they have separate structure in them, not integrated with their “self”, that encoded thought patterns from their parents, blah blah blah etc.). You can say “oh well yes of course if it’s *just a metaphor* maybe I don’t want to dismiss them”, but the point is that from a partially pre-theoretic confusion, it’s not clear what’s a metaphor, and it requires further work to disambiguate what’s a metaphor.
As the joke goes, there’s nothing crazy about talking to dead people. When dead people respond, then you start worrying.
Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.
This is really, really serious. If this happened to someone closer to me I’d be out for blood, and probably legal prosecution.
Let’s not minimize how fucked up this is.
Olivia, Devi and I all talked to people other than Michael Vassar, such as Anna Salamon. We gravitated towards the Berkeley community, which was started around Eliezer’s writing. None of us are calling for blame, ostracism, or cancelling of Michael. Michael helped all of us in ways no one else did. None of us have a motive to pursue a legal case against him. Ziz’s sentence you quoted doesn’t implicate Michael in any crimes.
The sentence is also misleading given Devi didn’t detransition afaik.
Jessicata, I will be blunt here. This article you wrote was [EDIT: expletive deleted] misleading. Perhaps you didn’t do it on purpose; perhaps this is what you actually believe. But from my perspective, you are an unreliable narrator.
Your story, original version:
I worked for MIRI/CFAR
I had a psychotic breakdown, and I believed I was super evil
the same thing also happened to a few other people
conclusion: MIRI/CFAR is responsible for all this
Your story, updated version:
I worked for MIRI/CFAR
then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil
I actually used the drugs
I had a psychotic breakdown, and I believed I was super evil
the same thing also happened to a few other people
conclusion: I still blame MIRI/CFAR, and I am trying to downplay Vassar’s role in this
If you can’t see how these two stories differ, then… I don’t have sufficiently polite words to describe it, so let’s just say that to me these two stories seem very different.
Lest you accuse me of gaslighting, let me remind you that I am not doubting any of the factual statements you made. (I actually tried to collect them here, to separate them from the long stream of dark insinuations.) What I am saying is that you omitted a few “details”, which perhaps seem irrelevant to you, but in my opinion fundamentally change the meaning of the story.
At this moment, we just have to agree to disagree, I guess.
In my opinion, the greatest mistake MIRI/CFAR made in this story was being associated with Michael Vassar in the first place (and that’s putting it mildly; at some moment it seemed like Eliezer was in love with him, he couldn’t stop praising his high intelligence… well, I guess he learned that “alignment is more important than intelligence” applies not just to artificial intelligences but also to humans), providing him social approval and easy access to people who then suffered as a consequence. They are no longer making this mistake. Ironically, now it’s you, after having positioned yourself as a victim, who is blinded by his intelligence, and doesn’t see the harm he causes. But the proper way to stop other people from getting hurt is to make it known that listening too much to Vassar does this, predictably. So that he can no longer use the rationalist community as “social proof” to gain people’s trust.
EDIT: To explain my unkind words “after having positioned yourself as a victim”: the thing I am angry about is that you publicly describe your suffering as a way to show people that MIRI/CFAR is evil. But when it turns out that Michael Vassar is more directly responsible for it, suddenly the angle changes and he actually “helped you”.
So could you please make up your mind? Is having a psychotic breakdown and spending a few weeks catatonic in hospital a good thing or a bad thing? Is it trauma, or is it jailbreaking? Because it seems like you call it bad when you attribute it to MIRI/CFAR, but when other people suggest that Vassar was responsible, then it seems a bit like no big deal, definitely not anything to blame him for.
I could be very wrong, but the story I currently have about this myself is that Vassar himself was a different and saner person before he used too much psychedelics. :( :( :(
Non-agenda’d question: about when did you notice changes in him?
My autobiographical episodic memory is nowhere near good enough to answer this question, alas.
Do you have a timeline of when you think that shift happened? That might make it easier for other people who knew Vassar at the time to say whether their observation matched yours.
That… must have hurt a lot.
(I hope your story is right.)
I saw him make some questionable drug-use decisions at Burning Man in 2011 and 2012, including larger-than-normal doses, and I don’t think I saw all of it.
A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it’s typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those who start off fairly stable.
Could you expand more on this? E.g. what are a couple sentences in the post that seem most trying to show this.
I appreciate the thrust of your comment, including this sentence, but the sentence also seems uncharitable, like it’s collapsing down stuff that shouldn’t be collapsed. For example, it could be that the MIRI/CFAR/etc. social field set up (maybe by accident, or even due to no fault of any of the “central” people) conditions where “psychosis” is the best of the bad available options; in which case it makes sense to attribute causal fault to the social field, not to a person who e.g. makes that clear to you, and thereby more proximally causes your breakdown. (Of course there’s disagreement about whether that’s the state of the world, but it’s not necessarily incoherent.)
I do get the sense that jessicata is relating in a funny way to Michael Vassar, e.g. by warping the narrative around him while selectively posing as “just trying to state facts” in relation to other narrative fields; but this is hard to tell, since it’s also what it might look like if Michael Vassar was systematically scapegoated, and jessicata is reporting more direct/accurate (hence less bad-seeming) observations.
Where did jessicata corroborate this sentence “then Michael Vassar taught me that everyone is super evil, including CFAR/MIRI, and told me to use drugs in order to get a psychotic breakdown and liberate myself from evil” ?
I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character who was called Professor Quirrell. As an outsider, I didn’t see that as an unqualified endorsement—though I think your general message should be signal-boosted.
The claim that Michael Vassar is substantially like Quirrell seems strange to me. Where did you get the claim that Eliezer modelled Quirrell after Vassar?
To make the claim a bit more based on public data, take Vassar’s TEDx talk. I think it gives a good impression of how Vassar thinks. There are some official statistics that support his claim about life expectancy in Jordan, so I think there’s a good chance that Vassar actually believes what he says here.
If you look deeper, however, Jordan’s life expectancy is not as high as Vassar asserts. Given that the video is on the public record, that’s an error anyone who tries to fact-check what Vassar is saying can find. I don’t think it’s in Vassar’s interest to give a public talk with claims that are so easily shown to be wrong. Quirrell wouldn’t have made an error like this; he is far more controlled.
Eliezer made Vassar president of the precursor of MIRI. That’s a strong signal of trust and endorsement.
https://yudkowsky.tumblr.com/writing/empathyrespect
Eliezer has openly said Quirrell’s cynicism is modeled after a mix of Michael Vassar and Robin Hanson.
I appreciate you’re telling me this given that you believe it. I definitely am in some ways, and try to improve over time.
I said in the text that (a) there were conversations about corruption in EA institutions, including about the content of Ben Hoffman’s posts, (b) I was collaborating with Michael Vassar at the time, (c) Michael Vassar was commenting about social epistemology. I admit that connecting points (a) and (c) would have made the connection clearer, but it wouldn’t have changed the text much.
In cases where someone was previously part of a “cult” and later says it was a “cult” and abusive in some important ways, there has to be a stage where they’re thinking about how bad the social context was, and practically always, that involves conversations with other people who are encouraging them to look at the ways their social context is bad. So my having conversations where people try to convince me CFAR/MIRI are evil is expected given what else I have written.
Besides this, “in order to get a psychotic breakdown” is incredibly false about his intentions, as Zack Davis points out.
This was not in the literally initial version of the post but was included within a few hours, I think, when someone pointed out to me that it was relevant.
As I pointed out, this doesn’t obviously attribute less “spooky mind powers” to Michael Vassar compared with what Leverage was attributing to people, where Leverage attributing this (and isolating people from each other on the basis of it) was considered crazy and abusive. Maybe he really was this influential, but logical consistency is important here.
In this comment I’m saying he has an unclear and probably low amount of responsibility, so this is a misread.
I was pretty clear in the text that there were trauma symptoms resulting from these events and they also had advantages such as gaining a new perspective, and that overall I don’t regret working at MIRI. I was also clear that there are relatively better and worse social contexts in which to experience psychosis symptoms, and hospitalization indicates a relatively worse social context.
What I’m saying is that the Berkeley community should be.
Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.
I’m not going to comment on drug usage in detail for legal reasons, except to note that there are psychedelics legal in some places, such as marijuana in CA.
It doesn’t make sense to attribute unique causal responsibility for psychotic breaks to anyone, except maybe to the person it’s happening to. There are lots of people all of us were talking to in that time period who influenced us, and multiple people were advocating psychedelic use. Not all cases happened to people who were talking significantly with Michael around the time. As I mentioned in the OP, as I was becoming more psychotic, people tried things they thought might help, which generally didn’t, and they could have done better things instead. Even causal responsibility doesn’t imply blame, e.g. Eliezer had some causal responsibility due to writing things that attracted people to the Berkeley scene where there were higher-variance psychological outcomes. Michael was often talking with people who were already “not ok” in important ways, which probably affects the statistics.
Please see my comment on the grandparent.
I agree with Jessica’s general characterization that this is better understood as multi-causal rather than the direct cause of actions by one person.
Relevant bit of social data: Olivia is the most irresponsible-with-drugs person I’ve ever met, by a sizeable margin; and I know of one specific instance (not a person named in your comment or any other comments on this post) where Olivia gave someone an ill-advised drug combination and they had a bad time (though not a psychotic break).
I don’t remember specific names, but something similar happened at one of the first rationality minicamps. Technically, this was not about drugs but some supplements (i.e. completely legal things), but there was someone mixing various kinds of powders and saying “yeah, trust me, I have a lot of experience with this, I did a lot of research, it is perfectly safe to take a dose this high, really”, and then an ambulance had to be called.
So, I assume you mean that Olivia goes even further than this, right?
My memory of the RBC incident you’re referring to was that it wasn’t supplements that did it, it was a caffeine overdose from energy drinks leading into a panic attack. But there were certainly a lot of supplements around and they could’ve played a role I didn’t know about.
When I say that I believe Olivia is irresponsible with drugs, I’m not excluding the unscheduled supplements, but the story I referred to involved the scheduled kind.
I’ve posted an edit/update above after talking to Vassar.
A question for the ‘Vassarites’, if they will: were you doing anything like the “unihemispheric sleep” exercise (self-inducing hallucinations/dissociative personalities by sleep deprivation) the Zizians are described as doing?
No. All sleep deprivation was unintentional (anxiety-induced in my case).
If you make bans like these, it would be worthwhile to communicate them to the people organizing SSC meetups. Especially when bans are made for the safety of meetup participants, not communicating them seems very strange to me.
After leaving the Bay Area, Vassar lived in Berlin for a while. For decisions about whether or not to make an effort to integrate someone like him (and invite him to LW and SSC meetups), this kind of information is valuable. Bay people not sharing it, while claiming they are doing anything that would work in practice like a ban, feels misleading.
I think Vassar left the Bay Area more than a year before COVID happened. As far as I remember, his stated reasoning was something along the lines of everyone in the Bay Area getting mindkilled by leftish ideology.
It was on the Register of Bans, which unfortunately went down after I deleted the blog. I admit I didn’t publicize it very well because this was a kind of sensitive situation and I was trying to do it without destroying his reputation.
If there are bans that are supposed to be enforced, mentioning them in the mails that go out to organizers for an ACX Everywhere event would make sense. I’m not 100% sure that I got all the mails, because Ruben forwarded mails for me (I normally organize LW meetups in Berlin and support Ruben with the SSC/ACX meetups), but in those there was no mention of the word ban.
I don’t think it needs to be public, but having such information in a mail like the one from Aug 23 would likely be necessary for a good portion of the meetup organizers to know that certain people aren’t welcome.
https://www.lesswrong.com/posts/iWWjq5BioRkjxxNKq/michael-vassar-at-the-slatestarcodex-online-meetup seems to have happened after that point in time. Vassar not only attended a Slate Star Codex meetup but was central to it, presenting his thoughts.
I organized that, so let me say that:
That online meetup, or the invitation to Vassar, was not officially affiliated to or endorsed by SSC. Any responsibility for inviting him is mine.
I have conversed with him a few times, as follows:
I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. During our time in conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
In 2012, he explained Acausal Trade to me, and that was the seed of this post. That discussion was quite sensible and I thank him for that.
A few years later, I invited him to speak at LessWrong Israel. At that time I thought him a mad genius—truly both. His talk was verging on incoherence, with flashes of apparent insight.
Before the online meetup, 2021, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting that is kind of interesting, actually.) I stayed with it for 2 or more hours before begging off, because it was fascinating in a way. I was able to analyze his techniques as Dark Arts. Apparently I am mature enough to shrug off such techniques.
His talk at my online meetup was even less coherent than any before, with multiple offensive elements. Indeed, I believe it was a mistake to have him on.
If I have offended anyone, I apologize, though I believe that letting someone speak is generally not something to be afraid of. But I wouldn’t invite him again.
It seems to me that despite organizing multiple SSC events, you had no knowledge that Vassar was banned from SSC events. Nor did anyone reading the event announcement know, to the extent that they would have told you Vassar was banned before the event happened.
To me that suggests there’s a problem: information about who’s banned isn’t shared with those organizing meetups in an effective way, so a ban doesn’t have the consequences one would expect it to have.
It might be useful to have a global blacklist somewhere. Possible legal consequences, if someone decides to sue you for libel. (Perhaps the list should only contain the names, not the reasons?)
EDIT: Nevermind. There are more things I would like to say about this, but this is not the right place. Later I may write a separate article explaining the threat model I had in mind.
Legal threats matter a great deal for what can be done in a situation like this.
When it comes to a “global blacklist” there’s the question about governance. Who decides who’s on and who isn’t. When it comes to SSC or ACX meetups the governance question is clear. Anybody who’s organizing a meetup under those labels should follow Scott’s guidance.
That however only works if that information is communicated to meetup organizers.
So, it’s been a long time since I actually commented on Less Wrong, but since the conversation is here...
Hearing about this is weird for me, because I feel like, compared to the opinions I heard about him from other people in the community, I kind of… always had uncomfortable feelings about Mike Vassar? And I say this without having had direct personal contact with him except, IIRC, maybe one meetup I attended where he was there and we didn’t talk directly, although we did occasionally participate in some of the same conversations online.
By all accounts, it sounds like he’s always been quite charismatic in person, and this isn’t the first time I’ve heard someone describe him as a “wizard.” But empirically, there are some people who’re very charismatic who propagate some really bad ideas and whose impacts on the lives of people around them, or on society at large, can be quite negative. As of last I was paying attention to him, I wouldn’t have expected Mike Vassar to have that negative an effect on the lives of the people around him, but I was always stuck in an awkward position of feeling like I was surrounded by people who took him more seriously than I felt like he ought to be taken. He evoked in a lot of people that feeling of “if these ideas are true, this is really huge,” but… there’s no shortage of ideas you can say that about, and I was always confused by the degree of credence people gave that his ideas were worth taking seriously. He always gave me a cult-leaderish impression, in a way that, say, Eliezer never did: he encouraged other people to take seriously ideas which I couldn’t understand why they didn’t treat with more skepticism.
I haven’t thought about him in quite some time now, but I still distinctly remember that feeling of “why do these smart people around me take this person so seriously? I just don’t see how his explanations of his ideas justify that.”
I met Vassar once. He came across as extremely charismatic (with a sort of charisma that probably only works on a particular type of people, which includes me), creating the impression of saying wise and insightful things (especially if you lack relevant domain knowledge), while in truth he was saying a lot of stuff which was patently absurd. Something about his delivery was so captivating that it took me a while to “shake off the fairy dust” and realize just how silly some of his claims were, even when it should have been obvious from the start. Moreover, his worldview seemed heavily based on paranoid / conspiracy-theory type thinking. So, yes, I’m not too surprised by Scott’s revelations about him.
Yeah, it definitely didn’t work on me. I believe I wrote this thread shortly after my one-and-only interaction with him, in which he said a lot of things that made me very skeptical but that I couldn’t easily refute, or had much time to think about before he would move on to some other topic. (Interestingly, he actually replied in that thread even though I didn’t mention him by name.)
It saddens me to learn that his style of conversation/persuasion “works” on many people who otherwise seem very smart and capable (and even self-selected for caring about being rational). It seems like pretty bad news as far as what kind of epistemic situation humanity is in (e.g., how easily we will be manipulated by even slightly-smarter-than-human AIs / human-AI systems).
Oh, this is because the OP that I was replying to did mention him by name:
Heh, the same feeling here. I didn’t have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn’t reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed.
Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it’s all gibberish to me.
Hypothesis 2: He is more persuasive in person than in writing. (But once he has impressed you in person, you will now see greatness in his writing, too. Maybe because of the halo effect. Maybe because now you understand the hidden layers of what he actually meant.) Maybe he is more persuasive in person because he can optimize his message for the receiver; which might be a good thing, or a bad thing.
Hypothesis 3: He gives high-variance advice. Some of it amazingly good, some of it horribly wrong. When people take him seriously, some of them benefit greatly, others suffer. Those who benefitted will tell the story. (Those who suffered will leave the community.)
My probability distribution was gradually shifting from 1 to 3.
Not a direct response to you, but if anyone who hasn’t talked to Vassar is wanting an example of Vassar-conversation that may be easier to understand or get some sense from than most examples would (though it’ll have a fair bit in it that’ll probably still seem false/confusing), you might try Spencer Greenberg’s podcast with Vassar.
As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try and clearly parse what he’s saying. I certainly did not fully succeed.
My notes.
It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?
I would really like to understand what he’s getting at by the way, so if it is clearer for you than it is for me, I’d actively appreciate clarification.
i tried reading / skimming some of that summary
it made me want to scream
what a horrible way to view the world / people / institutions / justice
i should maybe try listening to the podcast to see if i have a similar reaction to that
Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.
In Harry Potter the standard practice seems to be to “eat chocolate” and perhaps “play with puppies” after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.
Then there is Gendlin’s Litany (and please note that I am linking to a critique, not to unadulterated “yay for the litany” ideas) which I believe is part of Lesswrong’s canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.
The reason to include the Litany (flaws and all?) in a canon would be specifically to try to build a system of social interactions that can at least sometimes talk about understanding the world as it really is.
Then, atop this shared understanding of a potentially sad world, the social group with this litany as common knowledge might actually engage in purposive (and “ethical”?) planning processes that will work because the plans are built on an accurate perception of the barriers and risks of any given plan. In theory, actions based on such plans would mostly tend to “reliably and safely accomplish the goals” (maybe not always, but at least such practices might give one an edge) and this would work even despite the real barriers and real risks that stand between “the status quo” and “a world where the goal has been accomplished”… thus, the litany itself:
My personal experience, as a person with feelings, is that I can work on “the hot stuff” only in small motions, mostly/usually as a hobby, because otherwise the totalizing implications of some ideas threaten to cause an internal information cascade that is probably abstractly undesirable. If the cascade happens, it might require the injection of additional cognitive and/or emotional labor of a very unusual sort to escape from the metaphorical “gravity well” of perspectives like this, which have an internal logic that “makes as if to demand” that the perspective not be dropped, except maybe “at one’s personal peril”.
Running away from the first hint of a non-trivial infohazard, especially an infohazard being handled without thoughtful safety measures, is a completely valid response in my book.
Another great option is “talk about it with your wisest and most caring grand parent (or parent)”.
Another option is to look up the oldest versions of the idea, and examine their sociological outcomes (good and bad, in a distribution), and consider if you want to be exposed to that outcome distribution.
Also, you don’t have to jump in. You can take baby steps (one per week or one per month or one per year) and re-apply your safety checklist after each step?
Personally, I try not to put “ideas that seem particularly hot” on the Internet, or in conversations, by default, without verifying things about the audience, but I could understand someone who was willing to do so.
However also, I don’t consider a given forum to be “the really real forum, where the grownups actually talk”… unless infohazards like this cause people to have some reaction OTHER than traumatic suffering displays (and upvotes of the traumatic suffering display from exposure to sad ideas).
This leads me to be curious about any second thoughts or second feelings you’ve had, but only if you feel ok sharing them in this forum. Could you perhaps reply with:
<silence> (a completely valid response, in my book)
”Mu.” (that is, being still in the space, but not wanting to pose or commit)
”The ideas still make me want to scream, but I can afford emitting these ~2 bits of information.” or
“I calmed down a bit, and I can think about this without screaming now, and I wrote down several ideas and deleted a bunch of them and here’s what’s left after applying some filters for safety: <a few sentences with brief short authentic abstractly-impersonal partial thoughts>”.
There’s also these 2 podcasts which cover quite a variety of topics, for anyone who’s interested:
You’ve Got Mel—With Michael Vassar
Jim Rutt Show—Michael Vassar on Passive-Aggressive Revolution
I haven’t seen/heard anything particularly impressive from him either, but perhaps his ‘best work’ just isn’t written down anywhere?
My impression as an outsider (I met him once and heard and read some things people were saying about him) was that he seemed smart but also seemed like kind of a kook...
I have replied to this comment in a top-level post.
Ziz’s perspective here gives you a pretty detailed example of how this social trick works (i.e. spontaneously pretend something someone else did was objectionable and use it as an excuse to make a fit/leave to make the other person walk on eggshells or chase you).
Since comments get occluded you should refer to an edit/update somewhere at the top if you want it to be seen by those who already read your original comment.
Is this the highest rated comment on the site?