I talked and corresponded with Michael a lot during 2017–2020, and it seems likely that one of the psychotic breaks people are referring to is mine from February 2017? (Which Michael had nothing to do with causing, by the way.) I don’t think you’re being fair.
“jailbreak” yourself from it (I’m using a term I found on Ziz’s discussion of her conversations with Vassar; I don’t know if Vassar uses it himself)
I’m confident this is only a Ziz-ism: I don’t recall Michael using the term, and I just searched my emails for jailbreak, and there are no hits from him.
again, this involves making them paranoid about MIRI/CFAR and convincing them to take lots of drugs [...] describing how it was a Vassar-related phenomenon
I’m having trouble figuring out how to respond to this hostile framing. I mean, it’s true that I’ve talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and “the community” have failed to live up to their stated purposes. Separately, it’s also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)
But, well … if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up, were now failing to achieve their purposes, wouldn’t you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends’ lives better, wouldn’t you recommend them?
Michael is a charismatic guy who has strong views and argues forcefully for them. That’s not the same thing as having mysterious mind powers to “make people paranoid” or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I’m sure he’d be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.
borderline psychosis, which the Vassarites mostly interpreted as success (“these people have been jailbroken out of the complacent/conformist world, and are now correctly paranoid and weird”)
I can’t speak for Michael or his friends, and I don’t want to derail the thread by going into the details of my own situation. (That’s a future community-drama post, for when I finally get over enough of my internalized silencing-barriers to finish writing it.) But speaking only for myself, I think there’s a nearby idea that actually makes sense: if a particular social scene is sufficiently crazy (e.g., it’s a cult), having a mental breakdown is an understandable reaction. It’s not that mental breakdowns are in any way good—in a saner world, that wouldn’t happen. But if you were so unfortunate to be in a situation where the only psychologically realistic outcomes were either to fall into conformity with the other cult-members, or have a stress-and-sleep-deprivation-induced psychotic episode as you undergo a “deep emotional break with the wisdom of [your] pack”, the mental breakdown might actually be less bad in the long run, even if it’s locally extremely bad.
My main advice is that if he or someone related to him asks you if you want to take a bunch of drugs and hear his pitch for why the world is corrupt, you say no.
I recommend hearing out the pitch and thinking it through for yourself. (But, yes, without drugs; I think drugs are very risky and strongly disagree with Michael on this point.)
ZD said Vassar broke them out of a mental hospital. I didn’t ask them how.
(Incidentally, this was misreporting on my part, due to me being crazy at the time and attributing abilities to Michael that he did not, in fact, have. Michael did visit me in the psych ward, which was incredibly helpful—it seems likely that I would have been much worse off if he hadn’t come—but I was discharged normally; he didn’t bust me out.)
I don’t want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn’t harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I’m suspicious), but when you had some more minor issues in 2019 I was more in the loop and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their “it’s correct to be freaking out about learning your entire society is corrupt and gaslighting” shtick.
I’m having trouble figuring out how to respond to this hostile framing. I mean, it’s true that I’ve talked with Michael many times about ways in which (in his view, and separately in mine) MIRI, CfAR, and “the community” have failed to live up to their stated purposes. Separately, it’s also true that, on occasion, Michael has recommended I take drugs. (The specific recommendations I recall were weed and psilocybin. I always said No; drug use seems like a very bad idea given my history of psych problems.)
[...]
Michael is a charismatic guy who has strong views and argues forcefully for them. That’s not the same thing as having mysterious mind powers to “make people paranoid” or cause psychotic breaks! (To the extent that there is a correlation between talking to Michael and having psych issues, I suspect a lot of it is a selection effect rather than causal: Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.) If someone thinks Michael is wrong about something, great: I’m sure he’d be happy to argue about it, time permitting. But under-evidenced aspersions that someone is somehow dangerous just to talk to are not an argument.
I more or less Outside View agree with you on this, which is why I don’t go around making call-out threads or demanding people ban Michael from the community or anything like that (I’m only talking about it now because I feel like it’s fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically). “This guy makes people psychotic by talking to them” is a silly accusation to go around making, and I hate that I have to do it!
But also, I do kind of notice the skulls and they are really consistent, and I would feel bad if my desire not to say this ridiculous thing resulted in more people getting hurt.
I think the minimum viable narrative here is, as you say, something like “Michael is very good at spotting people right on the verge of psychosis, and then he suggests they take drugs.” Maybe a slightly more complicated narrative involves bringing them to a state of total epistemic doubt where they can’t trust any institutions or any of the people they formerly thought were their friends, although now this is getting back into the “he’s just having normal truth-seeking conversation” objection. He also seems really good at pushing trans people’s buttons in terms of their underlying anxiety around gender dysphoria (see the Ziz post), so maybe that contributes somehow. I don’t know how it happens, I’m sufficiently embarrassed to be upset about something which looks like “having a nice interesting conversation” from the outside, and I don’t want to violate liberal norms that you’re allowed to have conversations—but I think those norms also make it okay to point out the very high rate at which those conversations end in mental breakdowns.
Maybe one analogy would be people with serial emotionally abusive relationships—should we be okay with people dating Brent? Like yes, he had a string of horrible relationships that left the other person feeling violated and abused and self-hating and trapped. On the other hand, most of this, from the outside, looked like talking. He explained why it would be hurtful for the other person to leave the relationship or not do what he wanted, and he was convincing and forceful enough about it that it worked (I understand he also sometimes used violence, but I think the narrative still makes sense without it). Even so, the community tried to make sure people knew if they started a relationship with him they would get hurt, and eventually got really insistent about that. I do feel like this was a sort of boundary crossing of important liberal norms, but I think you’ve got to at least leave that possibility open for when things get really weird.
Before I actually make my point I want to wax poetic about reading SlateStarCodex.
In some post whose name I can’t remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, “Huh, I need to only be convinced by true things.”
This is extremely relatable to my lived experience. I am a stereotypical “high-functioning autist.” I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.
To the degree that “rationality styles” are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.
Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.
Thing 1:
Imagine two world models:
1. Some people want to act as perfect nth-order cooperating utilitarians, but can’t because of human limitations. They are extremely scrupulous, so they feel anguish and collapse emotionally. To prevent this, they rationalize and confabulate explanations for why their behavior actually is perfect. Then a moderately schizotypal man arrives and says: “Stop rationalizing.” Then the humans revert to the all-consuming anguish.
2. A collection of imperfect human moral actors who believe in utilitarianism act in an imperfect utilitarian way. An extremely charismatic man arrives who uses their scrupulosity to convince them they are not behaving morally, and then leverages their ensuing anguish to hijack their agency.
Which of these world models is correct? Both, obviously, because we’re all smart people here and understand the Machiavellian Intelligence Hypothesis.
Thing 2:
Imagine a being called Omegarapist. It has important ideas about decision theory and organizations. However, it has an uncontrollable urge to rape people. It is not a superintelligence; it is merely an extremely charismatic human. (This is a refutation of the Brent Dill analogy. I do not know much about Brent Dill.)
You are a humble student of Decision Theory. What is the best way to deal with Omegarapist?
1. Ignore him. This is good for AI-box reasons, but bad because you don’t learn anything new about decision theory. Also, humans with strange mindstates are more likely to provide new insights, conditioned on them having insights to give (this condition excludes extreme psychosis).
2. Let Omegarapist out. This is a terrible strategy. He rapes everybody, AND his desire to rape people causes him to warp his explanations of decision theory.
Therefore we should use Strategy 1, right? No. This is motivated stopping. Here are some other strategies.
1a. Precommit to only talk with him if he castrates himself first.
1b. Precommit to call in the Scorched-Earth Dollar Auction Squad (California law enforcement) if he has sex with anybody involved in this precommitment, then let him talk with anybody he wants.
I made those in 1 minute of actually trying.
Returning to the object level, let us consider Michael Vassar.
Strategy 1 corresponds to exiling him. Strategy 2 corresponds to a complete reputational blank-slate and free participation. In three minutes of actually trying, here are some alternate strategies.
1a. Vassar can participate but will be shunned if he talks about “drama” in the rationality community or its social structure.
1b. Vassar can participate but is not allowed to talk with one person at once, having to always be in a group of 3.
2a. Vassar can participate but has to give a detailed citation, or an extremely prominent low-level epistemic status mark, to every claim he makes about neurology or psychiatry.
I am not suggesting any of these strategies, or even endorsing the idea that they are possible. I am asking: WHY THE FUCK IS EVERYONE MOTIVATED STOPPING ON NOT LISTENING TO WHATEVER HE SAYS!!!
I am a contractualist and a classical liberal. However, I recognized the empirical fact that there are large cohorts of people who relate to language exclusively for the purpose of predation and resource expropriation. What is a virtuous man to do?
The answer relies on the nature of language. Fundamentally, the idea of a free marketplace of ideas doesn’t rely on language or its use; it relies on the asymmetry of a weapon. The asymmetry of a weapon is a mathematical fact about information processing. It exists in the territory. If you see an information source that is dangerous, build a better weapon.
You are using a powerful asymmetric weapon of Classical Liberalism called language. Vassar is the fucking Necronomicon. Instead of sealing it away, why don’t we make another weapon? This idea that some threats are temporarily too dangerous for our asymmetric weapons, and have to be fought with methods other than reciprocity, is the exact same epistemology-hole found in diversity-worship.
“Diversity of thought is good.”
“I have a diverse opinion on the merits of vaccination.”
“Diversity of thought is good, except on matters where diversity of thought leads to coercion or violence.”
“When does diversity of thought lead to coercion or violence?”
“When I, or the WHO, say so. Shut up, prole.”
This is actually quite a few skulls, but everything has quite a few skulls. People die very often.
Thing 3:
Now let me address a counterargument:
Argument 1: “Vassar’s belief system posits a near-infinitely powerful omnipresent adversary that is capable of ill-defined mind control. This is extremely conflict-theoretic, and predatory.”
Here’s the thing: rationality in general is similar. I will use that same anti-Vassar counterargument as a steelman for sneerclub.
Argument 2: “The beliefs of the rationality community posit complete distrust in nearly every source of information and global institution, giving them an excuse to act antisocially. It describes human behavior as almost entirely Machiavellian, allowing them to be conflict-theoretic, selfish, rationalizing, and utterly incapable of coordinating. They ‘logically deduce’ the relevant possibility of eternal suffering or happiness for the human species (FAI and s-risk), and use that to control people’s current behavior and coerce them into giving up their agency.”
There is a strategy that accepts both of these arguments. It is called epistemic learned helplessness. It is actually a very good strategy if you are a normie. Metis and the reactionary concept of “traditional living/wisdom” are related principles. I have met people with 100 IQ who I would consider highly wise, due to skill at this strategy (and not accidentally being born religious, which is its main weak point.)
There is a strategy that rejects both of these arguments. It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype. Very few people follow this strategy so it is hard to give examples, but I will leave a quote from an old Scott Aaronson paper that I find very inspiring. “In pondering these riddles, I don’t have any sort of special intuition, for which the actual arguments and counterarguments that I can articulate serve as window-dressing. The arguments exhaust my intuition.”
THERE IS NO EFFECTIVE LONG-TERM STRATEGY THAT REJECTS THE SECOND ARGUMENT BUT ACCEPTS THE FIRST! THIS IS WHERE ALL THE FUCKING SKULLS ARE! Why? Because it requires a complex notion of what arguments to accept, and the more complex the notion, the easier it will be to rationalize around, apply inconsistently, or Goodhart. See “A formalist manifesto” by Moldbug for another description of this. (This reminds me of how UDT/FDT/TDT agents behave better than causal agents at everything, but get counterfactually mugged, which seems absurd to us. If you try to come up with some notion of “legitimate information” or “self-locating information” to prevent an agent from getting mugged, it will similarly lose functionality in the non-anthropic cases. [See the Sleeping Beauty problem for a better explanation.])
The only real social epistemologies are of the form:
“Free speech, but (explicitly defined but also common-sensibly just definition of ideas that lead to violence).”
Mine in particular is, “Free speech, but no (intentionally and directly inciting panic or violence using falsehoods).”
To put it a certain way, once you get on the Taking Ideas Seriously train, you cannot get off.
Thing 4:
Back when SSC existed, I got bored one day and looked at the blogroll. I discovered Hivewired. It was bad. Through Hivewired I discovered Ziz. I discovered the blackmail allegations while sick with a fever and withdrawing off an antidepressant. I had a mental breakdown, feeling utterly betrayed by the rationality community despite never even having talked to anyone in it. Then I rationalized it away. To be fair, this was reasonable considering the state in which I processed the information. However, the thought processes used to dismiss the worry were absolutely rationalizations. I can tell because I can smell them.
Fast forward a year. I am at a yeshiva to discover whether I want to be religious. I become an anti-theist and start reading rationality stuff again. I check out Ziz’s blog out of perverse curiosity. I go over the allegations again. I find a link to a more cogent, falsifiable, and specific case. I freak the fuck out. Then I get to work figuring out which parts are actually true.
MIRI paid out to blackmail. There’s an unironic Catholic working at CFAR and everyone thinks this is normal. He doesn’t actually believe in god, but he believes in belief, which is maybe worse. CFAR is a collection of rationality workshops, not a coordinated attempt to raise the sanity waterline (Anna told me this in a private communication, and this is public information as far as I know), but has not changed its marketing to match. Rationalists are incapable of coordinating, which is basically their entire job. All of these problems were foreseen by the Sequences, but no one has read the Sequences because most rationalists are an army of sci-fi midwits who read HPMOR then changed the beliefs they were wearing. (Example: Andrew Rowe. I’m sorry but it’s true, anyways please write Arcane Ascension book 4.)
I make contact with the actual rationality community for the first time. I trawl through blogs, screeds, and incoherent symbolist rants about morality written as a book review of The Northern Caves. Someone becomes convinced that I am an internet gangstalker who created an elaborate false identity of an 18-year-old gap year kid to make contact with them. Eventually I contact Benjamin Hoffman, who leads me to Vassar, who leads to the Vassarites.
He points out to me a bunch of things that were very demoralizing, and absolutely true. Most people act evil out of habituation and deviancy training, including my loved ones. Global totalitarianism is a relevant s-risk as societies become more and more hysterical due to a loss of existing wisdom traditions, and too low of a sanity waterline to replace them with actual thinking. (Mass surveillance also helps.)
I work on a project with him trying to create a micro-state within the government of New York City. During and after this project I am increasingly irritable and misanthropic. The project does not work. I effortlessly update on this, distance myself from him, then process the feeling of betrayal by the rationality community and inability to achieve immortality and a utopian society for a few months. I stop being a Vassarite. I continue to read blogs to stay updated on thinking well, and eventually I unlearn the new associated pain. I talk with the Vassarites as friends and associates now, but not as a club member.
What does this story imply? Michael Vassar induced mental damage in me, partly through the way he speaks and acts. However, as a primary effect of this, he taught me true things. With basic rationality skills, I avoided contracting the Vassar, then healed the damage to my social life and behavior caused by this whole shitstorm (most of said damage was caused by non-Vassar factors).
Now I am significantly happier, more agentic, and more rational.
Thing 5:
When I said what I did in Thing 1, I meant it. Vassar gets rid of identity-related rationalizations. Vassar drives people crazy. Vassar is very good at getting other people to see the moon in finger pointing at the moon problems and moving people out of local optimums into better local optimums. This requires the work of going downwards in the fitness landscape first. Vassar’s ideas are important and many are correct. It just happens to be that he might drive you insane. The same could be said of rationality. Reality is unfair; good epistemics isn’t supposed to be easy. Have you seen mathematical logic? (It’s my favorite field).
An example of an important idea that may come from Vassar, but is likely much older:
Control over a social hierarchy goes to a single person; this is a plurality preference-aggregation system. In such systems, the best strategy is to vote for one of the two blocs that “matter.” Similarly, if you must join a war and know you will be killed if your side loses, you should join the winning side. Thus humans are attracted to powerful groups of humans. This is a (grossly oversimplified) evolutionary origin of one type of conformity effect.
Power is the ability to make other human beings do what you want. There are fundamentally two strategies to get it: help other people so that they want you to have power, or hurt other people to credibly signal that you already have power. (Note the correspondence of these two to dominance and prestige hierarchies). Credibly signaling that you have a lot of power is almost enough to get more power.
However, if the people harming themselves to signal your power admit publicly that they are harming themselves, they can coordinate with neutral parties to move the Schelling point and establish a new regime. Thus there are two obvious strategies for achieving ultimate power: help people get what they want (extremely difficult), or make people publicly harm themselves while shouting how great they feel (much easier). The famous bad equilibrium of 8 hours of shocking oneself per day is an obvious example.
Benjamin Ross Hoffman’s blog is very good, but awkwardly organized. He conveys explicit, literal models of these phenomena that are very useful and do not carry the risk of filling your head with whispers from the beyond. However, they have less impact because of it.
Thing 6:
I’m almost done with this mad effortpost. I want to note one more thing. Mistake theory works better than conflict theory. THIS IS NOT NORMATIVE.
Facts about the map-territory distinction and facts about the behaviors of mapmaking algorithms are facts about the territory. We can imagine a very strange world where conflict theory is a more effective way to think. One of the key assumptions of conflict theorists is that complexity or attempts to undermine moral certainty are usually mind control. Another key assumption is that entrenched power groups, or individual malign agents, will use these things to hack you.
These conditions are neither necessary nor sufficient for conflict theory to be better than mistake theory. I have an ancient and powerful technique called “actually listening to arguments.” When I’m debating with someone who I know to argue in bad faith, I decrypt everything they say into logical arguments. Then I use those logical arguments to modify my world model. One might say adversaries can use biased selection and rationalization to make you less logical despite this strategy. I say, on an incurable hardware and wetware level, you are already doing this. (For example, any Bayesian agent of finite storage space is subject to the Halo Effect, as you described in a post once.) Having someone do it in a different direction can helpfully knock you out of your models and back into reality, even if their models are bad. This is why it is still worth decrypting the actual information content of people you suspect to be in bad faith.
Uh, thanks for reading, I hope this was coherent, have a nice day.
One note though: I think this post (along with most of the comments) isn’t treating Vassar as a fully real person with real choices. It (also) treats him like some kind of ‘force in the world’ or ‘immovable object’. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I’m glad you yourself were able to “With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life.”
But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are.
I think it’s pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that’s in his capacity, which I think is a lot.
“Vassar’s ideas are important and many are correct. It just happens to be that he might drive you insane.”
I might think this was a worthwhile tradeoff if I actually believed the ‘maybe insane’ part was unavoidable, and I do not believe it is. I know that with more mental training, people can absorb more difficult truths without risk of damage. Maybe Vassar doesn’t want to offer this mental training himself; that isn’t much of an excuse, in my book, to target people who are ‘close to the edge’ (where ‘edge’ might be near a better local optimum) but who lack solid social support, rationality skills, mental training, or spiritual groundedness and then push them.
His service is well-intentioned, but he’s not doing it wisely and compassionately, as far as I can tell.
I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.
In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…
I think you can either have a discussion that focuses on an individual (and if you do, it makes sense to model them with agency), or you can have more general threat models.
If, however, you mix the two, you are likely to get confused in both directions. You will project ideas from your threat model onto the person, and you will take random aspects of the individual into your threat model that aren’t typical of the threat.
I am not sure how much ‘not destabilize people’ is an option that is available to Vassar.
My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.
Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of “you are expected to behave better for status reasons look at my smug language”-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, “Vassar, just only say things that you think will have a positive effect on the person.” 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.
In the pathological case of Vassar, I think the naive strategy of “just say the thing you think is true” is still correct.
Mental training absolutely helps. I would say that, considering that the people who talk with Vassar are literally from a movement called rationality, it is a normatively reasonable move to expect them to be mentally resilient. Factually, this is not the case. The “maybe insane” part is definitely not unavoidable, but right now I think the problem is with the people talking to Vassar, and not he himself.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication.
My suggestion for Vassar is not to ‘try not to destabilize people’ exactly.
It’s to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he’s interacted with about what it’s like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you’re speaking to as though they’re a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking “at” rather than talking “to” or “with”. The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things.
I expect this process could take a long time / run into issues along the way, and so I don’t think it should be rushed. Not expecting a quick change. But claiming there’s no available option seems wildly wrong to me. People aren’t fixed points and generally shouldn’t be treated as such.
This is actually very fair. I think he does kind of insert information into people.
I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher’s information.
I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.
It’s to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he’s interacted with about what it’s like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you’re speaking to as though they’re a full, real human—not a pair of ears to be talked into or a mind to insert things into).
I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think that he’s pretty dissociated from his body, which closes a normal channel for perceiving impacts on the person he’s speaking with. This looks to me like some bodily process generating stress/pain and being a cause for the dissociation. It might need a body worker to fix whatever goes on there to create the conditions for perceiving the other person better.
Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, “Vassar, just only say things that you think will have a positive effect on the person.” 1. He already does that. 2. That is advocating that Vassar manipulate people.
You are making a false dichotomy here. You are assuming that everything that has a negative effect on a person is manipulation.
As Vassar himself sees the situation, people believe a lot of lies in order to fit in socially. From that perspective, getting people to stop believing those lies will make it harder for them to fit into society.
If you got a Nazi guard at Auschwitz into a state where the moral issue of their job could no longer be dissociated from, that’s very predictably going to have a negative effect on that prison guard.
Vassar’s position would be that it would be immoral to avoid talking about the truth about the nature of the guard’s job when talking with them, out of a motivation to make life easier for the guard.
I think this line of discussion would be well served by marking a natural boundary in the cluster “crazy.” Instead of saying “Vassar can drive people crazy” I’d rather taboo “crazy” and say:
Many people are using their verbal idea-tracking ability to implement a coalitional strategy instead of efficiently compressing external reality. Some such people will experience their strategy as invalidated by conversations with Vassar, since he’ll point out ways their stories don’t add up. A common response to invalidation is to submit to the invalidator by adopting the invalidator’s story. Since Vassar’s words aren’t selected to be a valid coalitional strategy instruction set, attempting to submit to him will often result in attempting obviously maladaptive coalitional strategies.
People using their verbal idea-tracking ability to implement a coalitional strategy cannot give informed consent to conversations with Vassar, because in a deep sense they cannot be informed of things through verbal descriptions, and the risk is one that cannot be described without the recursive capacity of descriptive language.
Personally I care much more, maybe lexically more, about the upside of minds learning about their situation, than the downside of mimics going into maladaptive death spirals, though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much like it’s desirable to avoid torturing animals, and it would be desirable for city lights not to interfere with sea turtles’ reproductive cycle by resembling the moon too much.
EDIT: Ben is correct to say we should taboo “crazy.”
This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought. (entirely wrong)
I also don’t think people interpret Vassar’s words as a strategy and implement incoherence. Personally, I interpreted Vassar’s words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don’t know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed)
The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought.
“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.
And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.
If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.
Personally, I interpreted Vassar’s words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away.
What specific claims turned out to be false? What counterevidence did you encounter?
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)
This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person’s language. Ideas and institutions have the agency; they wear people like skin.
Specific claim: this is how to take over New York.
Specific claim: this is how to take over New York.
Didn’t work.
I think this needs to be broken up into 2 claims:
1. If we execute strategy X, we’ll take over New York.
2. We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.
2 has been falsified decisively. The plan to recruit candidates via appealing to people’s explicit incentives failed, there wasn’t a good alternative, and as a result there wasn’t a chance to test other parts of the plan (1).
That’s important info and worth learning from in a principled way. Definitely I won’t try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or investing in people who seem like they’re already doing this, as long as I don’t have to count on other unknown people acting similarly in the future.
But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back and waiting for the plan to fail, and then saying, “see? novel multi-step plans don’t work!” extremely annoying. I’ve been on both sides of that kind of transaction, but if we want anything to work out well we have to distinguish cases of “we / someone else decided not to try” as a different kind of failure from “we tried and it didn’t work out.”
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
This seems to be conflating the question of “is it possible to construct a difficult problem?” with the question of “what’s the rate-limiting problem?”. If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I’d very much like to hear the details. If I’m persuaded I’ll be interested in figuring out how to help.
So far this seems like evidence to the contrary, though, as it doesn’t look like you thought you could get help making things better for many people by explaining the opportunity.
To the extent I’m worried about Vassar’s character, I am as equally worried about the people around him. It’s the people around him who should also take responsibility for his well-being and his moral behavior. That’s what friends are for. I’m not putting this all on him. To be clear.
I think it’s a fine way to think about mathematical logic, but if you try to think this way about reality, you’ll end up with views that make internal sense and are self-reinforcing but don’t follow the grain of facts at all. When you hear such views from someone else, it’s a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.
The people who actually know their stuff usually come off very different. Their statements are carefully delineated: “this thing about power was true in 10th century Byzantium, but not clear how much of it applies today”.
Also, just to comment on this:
It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype.
I think it’s somewhat changeable. Even for people like us, there are ways to make our processing more “fuzzy”. Deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; and interpersonally, the world is full of small spontaneous exchanges happening on the “warm fuzzy” level, it’s not nearly so cold a place as it seems, and plugging into that market is so worth it.
I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)
Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See “Safety in numbers” by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)
I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils.
I sometimes round things, it is not inherently bad.
Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things then dimming being bad in most contexts follows as a natural consequence.
On the second paragraph:
This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without the legibility of elucidation, false ideas thus gained are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians.
Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is “this is true everywhere and false nowhere.” See “The Proper Use of Humility,” and for an example of how delineations often should be large, “Universal Fire.”
On the first paragraph:
Reality is hostile through neutrality. Any optimizing agent naturally optimizes against most other optimization targets when resources are finite. Lifeforms are (badly) optimized for inclusive genetic fitness. Thermodynamics looks like the sort of Universal Law that an evil god would construct. According to a quick Google search approximately 3,700 people die in car accidents per day and people think this is completely normal.
Many things are actually effective. For example, most places in the United States have drinkable-ish running water. This is objectively impressive. Any model must not be entirely made out of “the world is evil” otherwise it runs against facts. But the natural mental motion you make, as a default, should be, “How is this system produced by an aggressively neutral, entirely mechanistic reality?”
See the entire Sequence on evolution, as well as Beyond the Reach of God.
I mostly see where you’re coming from, but I think the reasonable answer to “point 1 or 2 is a false dichotomy” is this classic, uh, tumblr quote (from memory):
“People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail.”
This goes especially if the thing that comes after “just” is “just precommit.”
My expectation is that the people who espouse 1 or 2 expect that the people interacting with Vassar are incapable of precommitting to the required strength. I don’t know if they’re correct, but I’d expect them to be, because I think people are just really bad at precommitting in general. If precommitting were easy, I think we’d all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.
This is a very good criticism! I think you are right about people not being able to “just.”
My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or criticize popular institutions. Perhaps it is the case that people genuinely tried to make a strategy but automatically rejected my toy strategies as false. I do not think it is, based on “vibe” and on the arguments that people are making, such as “argument from cult.”
I think you are actually completely correct about those strategies being bad. Instead, I failed to point out that I expect a certain level of mental robustness-to-nonsanity from people literally called “rationalists.” This comes off as sarcastic but I mean it completely literally.
Precommitting isn’t easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as “five minutes of actually trying” and alkjash’s “Hammertime.” Humans have a small component of behavior that is agentic, and a huge component of behavior that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social learning.) Many problems are solved immediately and almost effortlessly by just giving the reins to the small part.
Relatedly, to address one of your examples, I expect at least one of the following things to be true about any given competent rationalist.
1. They have a physiological problem.
2. They don’t believe becoming fit is worth their time, and have a good reason to go against the naive first-order model of “exercise increases energy and happiness set point.”
3. They are fit.
Hypocritically, I fail all three of these criteria. I take full blame for this failure and plan on ameliorating it. (You don’t have to take Heroic Responsibility for the world, but you have to take it about yourself.)
A trope-y way of thinking about it is: “We’re supposed to be the good guys!” Good guys don’t have to be heroes, but they have to be at least somewhat competent, and they have to, as a strong default, treat potential enemies like their equals.
It’s not just Vassar. It’s how the whole community has excused and rationalized away abuse. I think the correct answer to the omega rapist problem isn’t to ignore him but to destroy his agency entirely. He’s still going to alter his decision theory towards rape even if castrated.
However, I gave you a double-upvote because you did nothing normatively wrong. The fact that you are being mass-downvoted just because you linked to that article and because you seem to be associated with Ziz (because of the gibberish name and specific conception of decision theory) is extremely disturbing.
Can we have LessWrong not be Reddit? Let’s not be Reddit. Too late, we’re already Reddit. Fuck.
You are right that, unless people can honor precommitments perfectly and castration is irreversible even with transhuman technology, Omegarapist will still alter his decision theory. Despite this, there are probably better solutions than killing or disabling him. I say this not out of moral ickiness, but out of practicality.
-
Imagine both you and Omegarapist are actual superintelligences. Then you can just do a utility-function merge to avoid the inefficiency of conflict, and move on with your day.
Humans have a similar form of this. Humans, even when sufficiently distinct in moral or factual position as to want to kill each other, often don’t. This is partly because of an implicit assumption that their side, the correct side, will win in the end, and that this is less true if they break the symmetry and use weapons. Scott uses the example of a pro-life and pro-choice person having dinner together, and calls it “divine intervention.”
There is an equivalent of this with Omegarapist. Make some sort of pact and honor it: he won’t rape people, but you won’t report his previous rapes to the Scorched Earth Dollar Auction squad. Work together on decision theory until the project is complete. Then agree either to utility-merge with him in the consequent utility function, or just shoot him. I call this “swordfighting at the edge of a cliff while shouting about our ideologies.” I would be willing to work with Moldbug on Strong AI, but if we had to input the utility function, the person who would win would be determined by a cinematic swordfight. In a similar case with my friend Sudo Nim, we could just merge utilities.
If you use the “shoot him” strategy, Omegarapist is still dead. You just got useful work out of him first. If he rapes people, just call in the Dollar Auction squad. The problem here isn’t cooperating with Omegarapist, it’s thinking to oneself “he’s too useful to actually follow precommitments about punishing” if he defects against you. This is fucking dumb. There’s a great webnovel called Reverend Insanity which depicts what organizations look like when everyone uses pure CDT like this. It isn’t pretty, and it’s also a very accurate depiction of the real world landscape.
Oh come on. The post was downvoted because it was inflammatory and low quality. It made a sweeping assertion while providing no evidence except a link to an article that I have no reason to believe is worth reading. There is a mountain of evidence that being negative is not a sufficient cause for being downvoted on LW, e.g. the OP.
You absolutely have a reason to believe the article is worth reading.
If you live coordinated with an institution, spending 5 minutes of actually trying (every few months) to see if that institution is corrupt is a worthy use of time.
I don’t think I live coordinated with CFAR or MIRI, but it is true that, if they are corrupt, this is something I would like to know.
However, that’s not sufficient reason to think the article is worth reading. There are many articles making claims that, if true, I would very much like to know (e.g. someone arguing that the Christian Hell exists).
I think the policy I follow (although I hadn’t made it explicit until now) is to ignore claims like this by default but listen up as soon as I have some reason to believe that the source is credible.
Which incidentally was the case for the OP. I have spent a lot more than 5 minutes reading it & replies, and I have, in fact, updated my view of CFAR and MIRI. It wasn’t a massive update in the end, but it also wasn’t negligible. I also haven’t downvoted the OP, and I believe I also haven’t downvoted any comments from jessicata. I’ve upvoted some.
Michael told me once that he specifically seeks out people who are high in Eysenckian psychoticism.
So, this seems deliberate.
Because high-psychoticism people are the ones who are most likely to understand what he has to say.
This isn’t nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn’t like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky’s writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners): why, they’re preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!
I mean, technically, yes. But in Yudkowsky and friends’ worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they’re going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?
There’s a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don’t have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.
If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you’d object to that targeting strategy even though they’d be able to make an argument structurally the same as your comment.
Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it’s even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.
In general this seems really expected and unobjectionable? “If I’m trying to convince people of X, I’m going to find people who already believe a lot of the pre-requisites for understanding X and who might already assign X a non-negligible prior”. This is how pretty much all systems of ideas spread, I have trouble thinking of a counterexample.
I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?
If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn’t care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.
The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from “psychotic,” and imagine there is a spectrum from autistic to psychotic. In this spectrum the extreme autistic is exclusively focused on exactly one thing at a time, and is incapable of cognition that has to take into account context, especially context they aren’t already primed to have in mind, and the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or another could be very helpful in different contexts.
See also: indexicality.
On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than “autism,” on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).
I wouldn’t find it objectionable. I’m not really sure what morally relevant distinction is being pointed at here, apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.
Well, I don’t think it’s obviously objectionable, and I’d have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like “we’d all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we’re talking to them but whose appeal comes from emotional predispositions that we sought out in them that aren’t generally either truth-tracking or good for them” seems plausible to me. But I think it’s obviously not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which was what moved me to comment.
I don’t have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as “susceptibility to invalid methods of persuasion”, which seems notably higher in the case of people with high “apocalypticism” than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high “psychoticism”.)
That might be relevant in some cases but seems unobjectionable both in the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck, it’s by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger’s-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (also both frequent around here).
I question Vassar’s wisdom, if what you say is indeed true about his motives.
I question whether he’s got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he’s appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn’t know how to integrate.
I question how much work he’s done on his own shadow and whether it’s not inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics or if he has ‘shadow stuff’ that he’s not seeing.
I don’t think this needs to be hashed out in public, but I hope people are working closer to him on these things who have the wisdom and integrity to do the right thing.
But, well … if you genuinely thought that institutions and a community that you had devoted a lot of your life to building up, were now failing to achieve their purposes, wouldn’t you want to talk to people about it? If you genuinely thought that certain chemicals would make your friends lives’ better, wouldn’t you recommend them?
When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I'd love for anyone who's nearer to confirm or deny the rumor and fill in the missing pieces.
As I mentioned elsewhere, I was heavily involved in that incident for a couple of months after it happened, and I looked for causes that could help with the defense. AFAICT, no drugs were taken in the days leading up to the mental health episode or arrest (or the people who took drugs with him lied about it).
As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don’t think anyone is to blame for his having had a mental break in the first place.
I've now gotten some better-sourced information from a friend who's actually in good contact with Eric. Given that, I'm also quite certain that there were no drugs involved and that this isn't a case of any one person being mainly responsible for what happened, but of multiple people making bad decisions. I'm currently hoping that Eric will tell his side himself so that there's less indirection about the information sourcing, so I'm not saying more about the details at this point in time.
Edit: The following account is a component of a broader and more complex narrative. While it played a significant role, it must be noted that there were numerous additional challenges concurrently affecting my life. Absent these complicating factors, the issues delineated in this post alone may not have precipitated such severe consequences. Additionally, I have made minor revisions to the third-to-last bullet point for clarity.
It is pertinent to provide some context to parts of my story that are relevant to the ongoing discussions.
My psychotic episode was triggered by a confluence of factors, including acute physical and mental stress, as well as exposure to a range of potent memes. I have composed a detailed document on this subject, which I have shared privately with select individuals. I am willing to share this document with others who were directly involved or have a legitimate interest. However, a comprehensive discussion of these details is beyond the ambit of this post, which primarily focuses on the aspects related to my experiences at Vassar.
During my psychotic break, I believed that someone associated with Vassar had administered LSD to me. Although I no longer hold this belief, I cannot entirely dismiss it. Nonetheless, given my deteriorated physical and mental health at the time, the vividness of my experiences could be attributed to a placebo effect or the onset of psychosis.
My delusions prominently featured Vassar. At the time of my arrest, I had a notebook with multiple entries stating “Vassar is God” and “Vassar is the Devil.” This fixation partly stemmed from a conversation with Vassar, where he suggested that my “pattern must be erased from the world” in response to my defense of EA. However, it was primarily fueled by the indirect influence of someone from his group with whom I had more substantial contact.
This individual was deeply involved in a psychological engagement with me in the months leading to my psychotic episode. In my weakened state, I was encouraged to develop and interact with a mental model of her. She once described our interaction as “roleplaying an unfriendly AI,” which I perceived as markedly hostile. Despite the negative turn, I continued the engagement, hoping to influence her positively.
After joining Vassar’s group, I urged her to critically assess his intense psychological methods. She relayed a conversation with Vassar about “fixing” another individual, Anna (Salamon), to “see material reality” and “purge her green.” This exchange profoundly disturbed me, leading to a series of delusions and ultimately exacerbating my psychological instability, culminating in a psychotic state. This descent into madness continued for approximately 36 hours, ending with an attempted suicide and an assault on a mental health worker.
Additionally, it is worth mentioning that I visited Leverage on the same day. Despite exhibiting clear signs of delusion, I was advised to exercise caution with psychological endeavors. Ideally, further intervention, such as suggesting professional help or returning me to my friends, might have been beneficial. I was later informed that I was advised to return home, though my recollection of this is unclear due to my mental state at the time.
In the hotel that night, my mental state deteriorated significantly after I performed a mental action which I interpreted as granting my mental model of Vassar substantial influence over my thoughts, in an attempt to regain stability.
While there are many more intricate details to this story, I believe the above summary encapsulates the most critical elements relevant to our discussion.
I do not attribute direct blame to Vassar, as it is unlikely he either intended or could have reasonably anticipated these specific outcomes. However, his approach, characterized by high-impact psychological interventions, can inadvertently affect the mental health of those around him. I hope that he has recognized this potential for harm and exercises greater caution in the future.
Thanks for sharing the details of your experience. Fyi I had a trip earlier in 2017 where I had the thought “Michael Vassar is God” and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.
If I’m trying to put my finger on a real effect here, it’s related to how Michael Vassar was one of the initial people who set up the social scene (e.g. running singularity summits and being executive director of SIAI), being on the more “social/business development/management” end relative to someone like Eliezer; so if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, like a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation (though to a much lesser degree of course).
As a related example, Von Neumann was involved in setting up post-WWII US Modernism, and is also attributed extreme mental powers by modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also has more memetic influence within that system, and could more effectively change its boundaries e.g. in creating new fields of study.
Fyi I had a trip earlier in 2017 where I had the thought “Michael Vassar is God” and told a couple people about this, it was overall a good trip, not causing paranoia afterwards etc.
2017 would be the year Eric's episode happened as well. Did this result in multiple conversations about "Michael Vassar is God" that Eric might then have picked up on when he hung around the group?
I don’t know, some of the people were in common between these discussions so maybe, but my guess would be that it wasn’t causal, only correlational. Multiple people at the time were considering Michael Vassar to be especially insightful and worth learning from.
I haven't used the word god myself, nor have I heard it used by other people to refer to someone who's insightful and worth learning from. Traditionally, people learn from prophets and not from gods.
Can someone please clarify what is meant in this context by 'Vassar's group', or the term 'Vassarites' used by others?
My intuition previously was that Michael Vassar had no formal 'group' or institution of any kind, and that it was just more like 'a cluster of friends who hung out together a lot', but this comment makes it seem like something more official.
While “Vassar’s group” is informal, it’s more than just a cluster of friends; it’s a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like “the AI safety community” or “wokeness” or “the startup scene” that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I’ve ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.
Median Group is the closest thing to a “Vassarite” institution, in that its listed members are 2⁄3 people who I’ve heard/read describing the strong influence Vassar has had on their thinking and 1⁄3 people I don’t know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn’t claim to speak for the whole scene or anything.
Michael and I are sometimes-housemates and I’ve never seen or heard of any formal “Vassarite” group or institution, though he’s an important connector in the local social graph, such that I met several good friends through him.
It sounds like you’re saying that based on extremely sparse data you made up a Michael Vassar in your head to drive you crazy. More generally, it seems like a bunch of people on this thread, most notably Scott Alexander, are attributing spooky magical powers to him. That is crazy cult behavior and I wish they would stop it.
ETA: In case it wasn’t clear, “that” = multiple people elsewhere in the comments attributing spooky mind control powers to Vassar. I was trying to summarize Eric’s account concisely, because insofar as it assigns agency at all I think it does a good job assigning it where it makes sense to, with the person making the decisions.
Reading through the comments here, I perceive a pattern of short-but-strongly-worded comments from you, many of which seem to me to contain highly inflammatory insinuations while giving little impression of any investment of interpretive labor. It’s not [entirely] clear to me what your goals are, but barring said goals being very strange and inexplicable indeed, it seems to me extremely unlikely that they are best fulfilled by the discourse style you have consistently been employing.
To be clear: I am annoyed by this. I perceive your comments as substantially lower-quality than the mean, and moreover I am annoyed that they seem to be receiving engagement far in excess of what I believe they deserve, resulting in a loss of attentional resources that could be used engaging more productively (either with other commenters, or with a hypothetical-version-of-you who does not do this). My comment here is written for the purpose of registering my impressions, and making it common-knowledge among those who share said impressions (who, for the record, I predict are not few) that said impressions are, in fact, shared.
(If I am mistaken in the above prediction, I am sure the voters will let me know in short order.)
I say all of the above while being reasonably confident that you do, in fact, have good intentions. However, good intentions do not ipso facto result in good comments, and to the extent that they have resulted in bad comments, I think one should point this fact out as bluntly as possible, which is why I worded the first two paragraphs of this comment the way I did. Nonetheless, I felt it important to clarify that I do not stand against [what I believe to be] your causes here, only the way you have been going about pursuing those causes.
(For the record: I am unaffiliated with MIRI, CFAR, Leverage, MAPLE, the “Vassarites”, or the broader rationalist community as it exists in physical space. As such, I have no direct stake in this conversation; but I very much do have an interest in making sure discussion around any topics this sensitive are carried out in a mature, nuanced way.)
If you want to clarify whether I mean to insinuate something in a particular comment, you could ask, like I asked Eliezer. I’m not going to make my comments longer without a specific idea of what’s unclear, that seems pointless.
It is accurate to state that I constructed a model of him based on limited information, which subsequently contributed to my dramatic psychological collapse. Nevertheless, the reason for developing this particular model can be attributed to his interactions with me and others. This was not due to any extraordinary or mystical abilities, but rather his profound commitment to challenging individuals’ perceptions of conventional reality and mastering the most effective methods to do so.
This approach is not inherently negative. However, it must be acknowledged that for certain individuals, such an intense disruption of their perceived reality can precipitate a descent into a detrimental psychological state.
Thanks for verifying. In hindsight my comment reads as though it was condemning you in a way I didn’t mean to; sorry about that.
The thing I meant to characterize as “crazy cult behavior” was people in the comments here attributing things like what you did in your mind to Michael Vassar’s spooky mind powers. You seem to be trying to be helpful and informative here. Sorry if my comment read like a personal attack.
This can be unpacked into an alternative to the charisma theory.
Many people are looking for a reference person to tell them what to do. (This is generally consistent with the Jaynesian family of hypotheses.) High-agency people are unusually easy to refer to, because they reveal the kind of information that allows others to locate them. There’s sufficient excess demand that even if someone doesn’t issue any actual orders, if they seem to have agency, people will generalize from sparse data to try to construct a version of that person that tells them what to do.
I have sent you the document in question. As the contents are somewhat personal, I would prefer that it not be disseminated publicly. However, I am amenable to it being shared with individuals who have a valid interest in gaining a deeper understanding of the matter.
I recommend hearing out the pitch and thinking it through for yourself. (But, yes, without drugs; I think drugs are very risky and strongly disagree with Michael on this point.)
(Incidentally, this was misreporting on my part, due to me being crazy at the time and attributing abilities to Michael that he did not, in fact, have. Michael did visit me in the psych ward, which was incredibly helpful—it seems likely that I would have been much worse off if he hadn’t come—but I was discharged normally; he didn’t bust me out.)
I don't want to reveal any more specific private information than this without your consent, but let it be registered that I disagree with your assessment that your joining the Vassarites wasn't harmful to you. I was not around for the 2017 issues (though if you reread our email exchanges from April you will understand why I'm suspicious), but when you had some more minor issues in 2019 I was more in the loop, and I ended up emailing the Vassarites (deliberately excluding you from the email, a decision I will defend in private if you ask me) accusing them of making your situation worse and asking them to maybe lay off you until you were maybe feeling slightly better, and obviously they just responded with their "it's correct to be freaking out about learning your entire society is corrupt and gaslighting" shtick.
I more or less Outside View agree with you on this, which is why I don't go around making call-out threads or demanding people ban Michael from the community or anything like that (I'm only talking about it now because I feel like it's fair for the community to try to defend itself after Jessica attributed all of this to the wider community instead of Vassar specifically). "This guy makes people psychotic by talking to them" is a silly accusation to go around making, and I hate that I have to do it!
But also, I do kind of notice the skulls and they are really consistent, and I would feel bad if my desire not to say this ridiculous thing resulted in more people getting hurt.
I think the minimum viable narrative here is, as you say, something like "Michael is very good at spotting people right on the verge of psychosis, and then he suggests they take drugs." Maybe a slightly more complicated narrative involves bringing them to a state of total epistemic doubt where they can't trust any institutions or any of the people they formerly thought were their friends, although now this is getting back into the "he's just having normal truth-seeking conversation" objection. He also seems really good at pushing trans people's buttons in terms of their underlying anxiety around gender dysphoria (see the Ziz post), so maybe that contributes somehow. I don't know how it happens, I'm sufficiently embarrassed to be upset about something which looks like "having a nice interesting conversation" from the outside, and I don't want to violate liberal norms that you're allowed to have conversations—but I think those norms also make it okay to point out the very high rate at which those conversations end in mental breakdowns.
Maybe one analogy would be people with serial emotionally abusive relationships—should we be okay with people dating Brent? Like yes, he had a string of horrible relationships that left the other person feeling violated and abused and self-hating and trapped. On the other hand, most of this, from the outside, looked like talking. He explained why it would be hurtful for the other person to leave the relationship or not do what he wanted, and he was convincing and forceful enough about it that it worked (I understand he also sometimes used violence, but I think the narrative still makes sense without it). Even so, the community tried to make sure people knew if they started a relationship with him they would get hurt, and eventually got really insistent about that. I do feel like this was a sort of boundary crossing of important liberal norms, but I think you've got to at least leave that possibility open for when things get really weird.
Thing 0:
Scott.
Before I actually make my point I want to wax poetic about reading SlateStarCodex.
In some post whose name I can’t remember, you mentioned how you discovered the idea of rationality. As a child, you would read a book with a position, be utterly convinced, then read a book with the opposite position and be utterly convinced again, thinking that the other position was absurd garbage. This cycle repeated until you realized, “Huh, I need to only be convinced by true things.”
This is extremely relatable to my lived experience. I am a stereotypical “high-functioning autist.” I am quite gullible, formerly extremely gullible. I maintain sanity by aggressively parsing the truth values of everything I hear. I am extremely literal. I like math.
To the degree that “rationality styles” are a desirable artifact of human hardware and software limitations, I find your style of thinking to be the most compelling.
Thus I am going to state that your way of thinking about Vassar has too many fucking skulls.
Thing 1:
Imagine two world models:
Some people want to act as perfect nth-order cooperating utilitarians, but can’t because of human limitations. They are extremely scrupulous, so they feel anguish and collapse emotionally. To prevent this, they rationalize and confabulate explanations for why their behavior actually is perfect. Then a moderately schizotypal man arrives and says: “Stop rationalizing.” Then the humans revert to the all-consuming anguish.
A collection of imperfect human moral actors who believe in utilitarianism act in an imperfect utilitarian way. An extremely charismatic man arrives who uses their scrupulosity to convince them they are not behaving morally, and then leverages their ensuing anguish to hijack their agency.
Which of these world models is correct? Both, obviously, because we’re all smart people here and understand the Machiavellian Intelligence Hypothesis.
Thing 2:
Imagine a being called Omegarapist. It has important ideas about decision theory and organizations. However, it has an uncontrollable urge to rape people. It is not a superintelligence; it is merely an extremely charismatic human. (This is a refutation of the Brent Dill analogy. I do not know much about Brent Dill.)
You are a humble student of Decision Theory. What is the best way to deal with Omegarapist?
1. Ignore him. This is good for AI-box reasons, but bad because you don't learn anything new about decision theory. Also, humans with strange mindstates are more likely to provide new insights, conditioned on them having insights to give (this condition excludes extreme psychosis).
2. Let Omegarapist out. This is a terrible strategy. He rapes everybody, AND his desire to rape people causes him to warp his explanations of decision theory.
Therefore we should use Strategy 1, right? No. This is motivated stopping. Here are some other strategies.
1a. Precommit to only talk with him if he castrates himself first.
1b. Precommit to call in the Scorched-Earth Dollar Auction Squad (California law enforcement) if he has sex with anybody involved in this precommitment, then let him talk with anybody he wants.
I made those in 1 minute of actually trying.
Returning to the object level, let us consider Michael Vassar.
Strategy 1 corresponds to exiling him. Strategy 2 corresponds to a complete reputational blank-slate and free participation. In three minutes of actually trying, here are some alternate strategies.
1a. Vassar can participate but will be shunned if he talks about “drama” in the rationality community or its social structure.
1b. Vassar can participate but is not allowed to talk with one person at once, having to always be in a group of 3.
2a. Vassar can participate but has to give a detailed citation, or an extremely prominent low-level epistemic status mark, to every claim he makes about neurology or psychiatry.
I am not suggesting any of these strategies, or even endorsing the idea that they are possible. I am asking: WHY THE FUCK IS EVERYONE MOTIVATED STOPPING ON NOT LISTENING TO WHATEVER HE SAYS!!!
I am a contractualist and a classical liberal. However, I recognized the empirical fact that there are large cohorts of people who relate to language exclusively for the purpose of predation and resource expropriation. What is a virtuous man to do?
The answer relies on the nature of language. Fundamentally, the idea of a free marketplace of ideas doesn't rely on language or its use; it relies on the asymmetry of a weapon. The asymmetry of a weapon is a mathematical fact about information processing. It exists in the territory. If you see an information source that is dangerous, build a better weapon.
You are using a powerful asymmetric weapon of Classical Liberalism called language. Vassar is the fucking Necronomicon. Instead of sealing it away, why don’t we make another weapon? This idea that some threats are temporarily too dangerous for our asymmetric weapons, and have to be fought with methods other than reciprocity, is the exact same epistemology-hole found in diversity-worship.
“Diversity of thought is good.”
“I have a diverse opinion on the merits of vaccination.”
“Diversity of thought is good, except on matters where diversity of thought leads to coercion or violence.”
“When does diversity of thought lead to coercion or violence?”
“When I, or the WHO, say so. Shut up, prole.”
This is actually quite a few skulls, but everything has quite a few skulls. People die very often.
Thing 3:
Now let me address a counterargument:
Argument 1: “Vassar’s belief system posits a near-infinitely powerful omnipresent adversary that is capable of ill-defined mind control. This is extremely conflict-theoretic, and predatory.”
Here's the thing: rationality in general is similar. I will use that same anti-Vassar counterargument as a steelman for sneerclub.
Argument 2: “The beliefs of the rationality community posit complete distrust in nearly every source of information and global institution, giving them an excuse to act antisocially. It describes human behavior as almost entirely Machiavellian, allowing them to be conflict-theoretic, selfish, rationalizing, and utterly incapable of coordinating. They ‘logically deduce’ the relevant possibility of eternal suffering or happiness for the human species (FAI and s-risk), and use that to control people’s current behavior and coerce them into giving up their agency.”
There is a strategy that accepts both of these arguments. It is called epistemic learned helplessness. It is actually a very good strategy if you are a normie. Metis and the reactionary concept of “traditional living/wisdom” are related principles. I have met people with 100 IQ who I would consider highly wise, due to skill at this strategy (and not accidentally being born religious, which is its main weak point.)
There is a strategy that rejects both of these arguments. It is called Taking Ideas Seriously and using language literally. It is my personal favorite strategy, but I have no other options considering my neurotype. Very few people follow this strategy so it is hard to give examples, but I will leave a quote from an old Scott Aaronson paper that I find very inspiring. “In pondering these riddles, I don’t have any sort of special intuition, for which the actual arguments and counterarguments that I can articulate serve as window-dressing. The arguments exhaust my intuition.”
THERE IS NO EFFECTIVE LONG-TERM STRATEGY THAT REJECTS THE SECOND ARGUMENT BUT ACCEPTS THE FIRST! THIS IS WHERE ALL THE FUCKING SKULLS ARE! Why? Because it requires a complex notion of what arguments to accept, and the more complex the notion, the easier it will be to rationalize around, apply inconsistently, or Goodhart. See “A formalist manifesto” by Moldbug for another description of this. (This reminds me of how UDT/FDT/TDT agents behave better than causal agents at everything, but get counterfactually mugged, which seems absurd to us. If you try to come up with some notion of “legitimate information” or “self-locating information” to prevent an agent from getting mugged, it will similarly lose functionality in the non-anthropic cases. [See the Sleeping Beauty problem for a better explanation.])
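(A quick worked version of the counterfactual mugging, with the usual toy payoffs—the $100/$10,000 figures below are illustrative assumptions, not something anyone in this thread committed to: Omega flips a fair coin; on tails it asks you for $100, and on heads it pays you $10,000 only if it predicts you would have paid on tails.)

```python
# Toy expected-value comparison for the counterfactual mugging
# (payoff sizes are the usual illustrative ones, chosen here as an assumption).

P_HEADS = 0.5
REWARD_IF_YOU_WOULD_PAY = 10_000  # received on heads, only if you'd pay on tails
COST_OF_PAYING = 100              # handed over on tails, if you pay

def expected_value(pays_on_tails: bool) -> float:
    """Value of the whole gamble, evaluated before the coin is flipped."""
    heads_branch = REWARD_IF_YOU_WOULD_PAY if pays_on_tails else 0
    tails_branch = -COST_OF_PAYING if pays_on_tails else 0
    return P_HEADS * heads_branch + (1 - P_HEADS) * tails_branch

print(expected_value(pays_on_tails=True))   # 4950.0 -> the updateless policy pays
print(expected_value(pays_on_tails=False))  # 0.0
# Conditional on already having seen tails, paying looks like a pure -100,
# which is why the updateless answer feels absurd from inside that branch.
```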
The only real social epistemologies are of the form:
“Free speech, but (explicitly defined but also common-sensibly just definition of ideas that lead to violence).”
Mine in particular is, "Free speech but no (intentionally and directly inciting panic or violence using falsehoods)."
To put it a certain way, once you get on the Taking Ideas Seriously train, you cannot get off.
Thing 4:
Back when SSC existed, I got bored one day and looked at the blogroll. I discovered Hivewired. It was bad. Through Hivewired I discovered Ziz. I discovered the blackmail allegations while sick with a fever and withdrawing off an antidepressant. I had a mental breakdown, feeling utterly betrayed by the rationality community despite never even having talked to anyone in it. Then I rationalized it away. To be fair, this was reasonable considering the state in which I processed the information. However, the thought processes used to dismiss the worry were absolutely rationalizations. I can tell because I can smell them.
Fast forward a year. I am at a yeshiva to discover whether I want to be religious. I become an anti-theist and start reading rationality stuff again. I check out Ziz's blog out of perverse curiosity. I go over the allegations again. I find a link to a more cogent, falsifiable, and specific case. I freak the fuck out. Then I get to work figuring out which parts are actually true.
MIRI paid out to blackmail. There's an unironic Catholic working at CFAR and everyone thinks this is normal. He doesn't actually believe in god, but he believes in belief, which is maybe worse. CFAR is a collection of rationality workshops, not a coordinated attempt to raise the sanity waterline (Anna told me this in a private communication, and this is public information as far as I know), but has not changed its marketing to match. Rationalists are incapable of coordinating, which is basically their entire job. All of these problems were foreseen by the Sequences, but no one has read the Sequences because most rationalists are an army of sci-fi midwits who read HPMOR then changed the beliefs they were wearing. (Example: Andrew Rowe. I'm sorry but it's true, anyways please write Arcane Ascension book 4.)
I make contact with the actual rationality community for the first time. I trawl through blogs, screeds, and incoherent symbolist rants about morality written as a book review of The Northern Caves. Someone becomes convinced that I am an internet gangstalker who created an elaborate false identity of an 18-year-old gap-year kid to make contact with them. Eventually I contact Benjamin Hoffman, who leads me to Vassar, who leads to the Vassarites.
He points out to me a bunch of things that were very demoralizing, and absolutely true. Most people act evil out of habituation and deviancy training, including my loved ones. Global totalitarianism is a relevant s-risk as societies become more and more hysterical due to a loss of existing wisdom traditions, and too low of a sanity waterline to replace them with actual thinking. (Mass surveillance also helps.)
I work on a project with him trying to create a micro-state within the government of New York City. During and after this project I am increasingly irritable and misanthropic. The project does not work. I effortlessly update on this, distance myself from him, then process the feeling of betrayal by the rationality community and inability to achieve immortality and a utopian society for a few months. I stop being a Vassarite. I continue to read blogs to stay updated on thinking well, and eventually I unlearn the new associated pain. I talk with the Vassarites as friends and associates now, but not as a club member.
What does this story imply? Michael Vassar induced mental damage in me, partly through the way he speaks and acts. However, as a primary effect of this, he taught me true things. With basic rationality skills, I avoided contracting the Vassar, then healed the damage to my social life and behavior caused by this whole shitstorm (most of said damage was caused by non-Vassar factors).
Now I am significantly happier, more agentic, and more rational.
Thing 5
When I said what I did in Thing 1, I meant it. Vassar gets rid of identity-related rationalizations. Vassar drives people crazy. Vassar is very good at getting other people to see the moon in finger pointing at the moon problems and moving people out of local optimums into better local optimums. This requires the work of going downwards in the fitness landscape first. Vassar’s ideas are important and many are correct. It just happens to be that he might drive you insane. The same could be said of rationality. Reality is unfair; good epistemics isn’t supposed to be easy. Have you seen mathematical logic? (It’s my favorite field).
An example of an important idea that may come from Vassar, but is likely much older:
Control over a social hierarchy goes to a single person; this is a plurality preference-aggregation system. In those, the best strategy is to vote for one of the two blocs that "matter." Similarly, if you need to join a war and know you will be killed if your side loses, you should join the winning side. Thus humans are attracted to powerful groups of humans. This is a (grossly oversimplified) evolutionary origin of one type of conformity effect.
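(A toy illustration of the "two blocs" point—the standings below are numbers I made up, not anything from the original argument: under plurality rule your vote only changes the outcome when it breaks the race between the front-runners, so a vote for a distant third is essentially never pivotal.)

```python
# Toy plurality election: a single vote only matters in the front-runner race.
# (Standings are invented for illustration.)

def winner(tallies: dict) -> str:
    # Highest tally wins; ties broken alphabetically for simplicity.
    return min(tallies, key=lambda c: (-tallies[c], c))

standings = {"A": 4500, "B": 4500, "C": 1500}  # A and B are the blocs that "matter"

for my_vote in ["A", "B", "C"]:
    t = dict(standings)
    t[my_vote] += 1  # cast your one vote
    print(my_vote, "->", winner(t))
# A -> A, B -> B, C -> A : only a vote for a front-runner changes who wins.
```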
Power is the ability to make other human beings do what you want. There are fundamentally two strategies to get it: help other people so that they want you to have power, or hurt other people to credibly signal that you already have power. (Note the correspondence of these two to dominance and prestige hierarchies). Credibly signaling that you have a lot of power is almost enough to get more power.
However, if you have people harming themselves to signal your power, if they admit publicly that they are harming themselves, they can coordinate with neutral parties to move the Schelling point and establish a new regime. Thus there are two obvious strategies to achieving ultimate power: help people get what they want (extremely difficult), make people publicly harm themselves while shouting how great they feel (much easier). The famous bad equilibrium of 8 hours of shocking oneself per day is an obvious example.
Benjamin Ross Hoffman’s blog is very good, but awkwardly organized. He conveys explicit, literal models of these phenomena that are very useful and do not carry the risk of filling your head with whispers from the beyond. However, they have less impact because of it.
Thing 6:
I’m almost done with this mad effortpost. I want to note one more thing. Mistake theory works better than conflict theory. THIS IS NOT NORMATIVE.
Facts about the map-territory distinction and facts about the behaviors of mapmaking algorithms are facts about the territory. We can imagine a very strange world where conflict theory is a more effective way to think. One of the key assumptions of conflict theorists is that complexity or attempts to undermine moral certainty are usually mind control. Another key assumption is that entrenched power groups, or individual malign agents, will use these things to hack you.
These conditions are neither necessary nor sufficient for conflict theory to be better than mistake theory. I have an ancient and powerful technique called "actually listening to arguments." When I'm debating with someone who I know to argue in bad faith, I decrypt everything they say into logical arguments. Then I use those logical arguments to modify my world model. One might say adversaries can use biased selection and rationalization to make you less logical despite this strategy. I say, on an incurable hardware and wetware level, you are already doing this. (For example, any Bayesian agent of finite storage space is subject to the Halo Effect, as you described in a post once.) Having someone do it in a different direction can helpfully knock you out of your models and back into reality, even if their models are bad. This is why it is still worth decrypting the actual information content of people you suspect to be in bad faith.
Uh, thanks for reading, I hope this was coherent, have a nice day.
I enjoyed reading this. Thanks for writing it.
One note though: I think this post (along with most of the comments) isn’t treating Vassar as a fully real person with real choices. It (also) treats him like some kind of ‘force in the world’ or ‘immovable object’. And I really want people to see him as a person who can change his mind and behavior and that it might be worth asking him to take more responsibility for his behavior and its moral impacts. I’m glad you yourself were able to “With basic rationality skills, avoid contracting the Vassar, then [heal] the damage to [your] social life.”
But I am worried about people treating him like a force of nature that you make contact with and then just have to deal with whatever the effects of that are.
I think it’s pretty immoral to de-stabilize people to the point of maybe-insanity, and I think he should try to avoid it, to whatever extent that’s in his capacity, which I think is a lot.
I might think this was a worthwhile tradeoff if I actually believed the ‘maybe insane’ part was unavoidable, and I do not believe it is. I know that with more mental training, people can absorb more difficult truths without risk of damage. Maybe Vassar doesn’t want to offer this mental training himself; that isn’t much of an excuse, in my book, to target people who are ‘close to the edge’ (where ‘edge’ might be near a better local optimum) but who lack solid social support, rationality skills, mental training, or spiritual groundedness and then push them.
His service is well-intentioned, but he’s not doing it wisely and compassionately, as far as I can tell.
I think that treating Michael Vassar as an unchangeable force of nature is the right way to go—for the purposes of discussions precisely like this one. Why? Because even if Michael himself can (and chooses to) alter his behavior in some way (regardless of whether this is good or bad or indifferent), nevertheless there will be other Michael Vassars out there—and the question remains, of how one is to deal with arbitrary Michael Vassars one encounters in life.
In other words, what we’ve got here is a vulnerability (in the security sense of the word). One day you find that you’re being exploited by a clever hacker (we decline to specify whether he is a black hat or white hat or what). The one comes to you and recommends a patch. But you say—why should we treat this specific attack as some sort of unchangeable force of nature? Rather we should contact this hacker and persuade him to cease and desist. But the vulnerability is still there…
I think you can either have a discussion that focuses on an individual and if you do it makes sense to model them with agency or you can have more general threat models.
If you however mix the two you are likely to get confused in both directions. You will project ideas from your threat model into the person and you will take random aspects of the individual into your threat model that aren’t typical for the threat.
I am not sure how much ‘not destabilize people’ is an option that is available to Vassar.
My model of Vassar is as a person who is constantly making associations, and using them to point at the moon. However, pointing at the moon can convince people of nonexistent satellites and thus drive people crazy. This is why we have debates instead of koan contests.
Pointing at the moon is useful when there is inferential distance; we use it all the time when talking with people without rationality training. Eliezer used it, and a lot of “you are expected to behave better for status reasons look at my smug language”-style theist-bashing, in the Sequences. This was actually highly effective, although it had terrible side effects.
I think that if Vassar tried not to destabilize people, it would heavily impede his general communication. He just talks like this. One might say, “Vassar, just only say things that you think will have a positive effect on the person.” 1. He already does that. 2. That is advocating that Vassar manipulate people. See Valencia in Worth the Candle.
In the pathological case of Vassar, I think the naive strategy of “just say the thing you think is true” is still correct.
Mental training absolutely helps. I would say that, considering that the people who talk with Vassar are literally from a movement called rationality, it is a normatively reasonable move to expect them to be mentally resilient. Factually, this is not the case. The “maybe insane” part is definitely not unavoidable, but right now I think the problem is with the people talking to Vassar, and not he himself.
I’m glad you enjoyed the post.
My suggestion for Vassar is not to ‘try not to destabilize people’ exactly.
It’s to very carefully examine his speech and its impacts, by looking at the evidence available (asking people he’s interacted with about what it’s like to listen to him) and also learning how to be open to real-time feedback (like, actually look at the person you’re speaking to as though they’re a full, real human—not a pair of ears to be talked into or a mind to insert things into). When he talks theory, I often get the sense he is talking “at” rather than talking “to” or “with”. The listener practically disappears or is reduced to a question-generating machine that gets him to keep saying things.
I expect this process could take a long time / run into issues along the way, and so I don’t think it should be rushed. Not expecting a quick change. But claiming there’s no available option seems wildly wrong to me. People aren’t fixed points and generally shouldn’t be treated as such.
This is actually very fair. I think he does kind of insert information into people.
I never really felt like a question-generating machine, more like a pupil at the foot of a teacher who is trying to integrate the teacher’s information.
I think the passive, reactive approach you mention is actually a really good idea of how to be more evidential in personal interaction without being explicitly manipulative.
Thanks!
I think I interacted with Vassar four times in person, so I might get some things wrong here, but I think that he's pretty dissociated from his body, which closes a normal channel of perceiving impacts on the person he's speaking with. This looks to me like some bodily process generating stress or pain and being a cause for dissociation. It might need a body worker to fix whatever goes on there to create the conditions for perceiving the other person better.
Beyond that, Circling might be an environment in which one can learn to interact with others as humans who have their own feelings, but that would require opening up to the Circling frame.
You are making a false dichotomy here. You are assuming that everything that has a negative effect on a person is manipulation.
As Vassar himself sees the situation, people believe a lot of lies for reasons of fitting in socially in society. From that perspective, getting people to stop believing those lies will make it harder for them to fit into society socially.
If you got a Nazi guard at Auschwitz into a state where the moral issue of their job can't be dissociated from anymore, that's very predictably going to have a negative effect on that prison guard.
Vassar's position would be that it would be immoral to avoid talking about the truth about the nature of the guard's job when talking with the guard, out of a motivation to make life easier for the guard.
I think this line of discussion would be well served by marking a natural boundary in the cluster “crazy.” Instead of saying “Vassar can drive people crazy” I’d rather taboo “crazy” and say:
Personally I care much more, maybe lexically more, about the upside of minds learning about their situation, than the downside of mimics going into maladaptive death spirals, though it would definitely be better all round if we can manage to cause fewer cases of the latter without compromising the former, much like it’s desirable to avoid torturing animals, and it would be desirable for city lights not to interfere with sea turtles’ reproductive cycle by resembling the moon too much.
My problem with this comment is it takes people who:
can’t verbally reason without talking things through (and are currently stuck in a passive role in a conversation)
and who:
respond to a failure of their verbal reasoning
under circumstances of importance (in this case moral importance)
and conditions of stress, induced by
trying to concentrate while in a passive role
failing to concentrate under conditions of high moral importance
by simply doing as they are told—and it assumes they are incapable of reasoning under any circumstances.
It also then denies people who are incapable of independent reasoning the right to be protected from harm.
EDIT: Ben is correct to say we should taboo “crazy.”
This is a very uncharitable interpretation (entirely wrong). The highly scrupulous people here can undergo genuine psychological collapse if they learn their actions aren’t as positive utility as they thought. (entirely wrong)
I also don’t think people interpret Vassar’s words as a strategy and implement incoherence. Personally, I interpreted Vassar’s words as factual claims then tried to implement a strategy on them. When I was surprised by reality a bunch, I updated away. I think the other people just no longer have a coalitional strategy installed and don’t know how to function without one. This is what happened to me and why I repeatedly lashed out at others when I perceived them as betraying me, since I no longer automatically perceived them as on my side. I rebuilt my rapport with those people and now have more honest relationships with them. (still endorsed)
Beyond this, I think your model is accurate.
“That which can be destroyed by the truth should be”—I seem to recall reading that somewhere.
And: “If my actions aren’t as positive utility as I think, then I desire to believe that my actions aren’t as positive utility as I think”.
If one has such a mental makeup that finding out that one’s actions have worse effects than one imagined causes genuine psychological collapse, then perhaps the first order of business is to do everything in one’s power to fix that (really quite severe and glaring) bug in one’s psyche—and only then to attempt any substantive projects in the service of world-saving, people-helping, or otherwise doing anything really consequential.
Thank you for echoing common sense!
What is psychological collapse?
For those who can afford it, taking it easy for a while is a rational response to noticing deep confusion; continuing to take actions based on a discredited model would be less appealing, and people often become depressed when they keep confusedly trying to do things that they don't want to do.
Are you trying to point to something else?
What specific claims turned out to be false? What counterevidence did you encounter?
Specific claim: the only nontrivial obstacle in front of us is not being evil
This is false. Object-level stuff is actually very hard.
Specific claim: nearly everyone in the aristocracy is agentically evil. (EDIT: THIS WAS NOT SAID. WE BASICALLY AGREE ON THIS SUBJECT.)
This is a wrong abstraction. Frame of Puppets seems naively correct to me, and has become increasingly reified by personal experience of more distant-to-my-group groups of people, to use a certain person’s language. Ideas and institutions have the agency; they wear people like skin.
Specific claim: this is how to take over New York.
Didn’t work.
I think this needs to be broken up into 2 claims:
1. If we execute strategy X, we'll take over New York. 2. We can use straightforward persuasion (e.g. appeals to reason, profit motive) to get an adequate set of people to implement strategy X.
Claim 2 has been falsified decisively. The plan to recruit candidates via appealing to people's explicit incentives failed, there wasn't a good alternative, and as a result there wasn't a chance to test the other parts of the plan (claim 1).
That’s important info and worth learning from in a principled way. Definitely I won’t try that sort of thing again in the same way, and it seems like I should increase my credence both that plans requiring people to respond to economic incentives by taking initiative to play against type will fail, and that I personally might be able to profit a lot by taking initiative to play against type, or investing in people who seem like they’re already doing this, as long as I don’t have to count on other unknown people acting similarly in the future.
But I find the tendency to respond to novel multi-step plans that would require someone to take initiative by sitting back and waiting for the plan to fail, and then saying, "see? novel multi-step plans don't work!" extremely annoying. I've been on both sides of that kind of transaction, but if we want anything to work out well we have to distinguish cases of "we / someone else decided not to try" as a different kind of failure from "we tried and it didn't work out."
This is actually completely fair. So is the other comment.
This seems to be conflating the question of “is it possible to construct a difficult problem?” with the question of “what’s the rate-limiting problem?”. If you have a specific model for how to make things much better for many people by solving a hard technical problem before making substantial progress on human alignment, I’d very much like to hear the details. If I’m persuaded I’ll be interested in figuring out how to help.
So far this seems like evidence to the contrary, though, as it doesn’t look like you thought you could get help making things better for many people by explaining the opportunity.
To the extent I’m worried about Vassar’s character, I am as equally worried about the people around him. It’s the people around him who should also take responsibility for his well-being and his moral behavior. That’s what friends are for. I’m not putting this all on him. To be clear.
I think it's a fine way to think about mathematical logic, but if you try to think this way about reality, you'll end up with views that make internal sense and are self-reinforcing but don't follow the grain of facts at all. When you hear such views from someone else, it's a good idea to see which facts they give in support. Do their facts seem scant, cherrypicked, questionable when checked? Then their big claims are probably wrong.
The people who actually know their stuff usually come off very different. Their statements are carefully delineated: “this thing about power was true in 10th century Byzantium, but not clear how much of it applies today”.
Also, just to comment on this:
I think it’s somewhat changeable. Even for people like us, there are ways to make our processing more “fuzzy”. Deliberately dimming some things, rounding others. That has many benefits: on the intellectual level you learn to see many aspects of a problem instead of hyperfocusing on one; emotionally you get more peaceful when thinking about things; and interpersonally, the world is full of small spontaneous exchanges happening on the “warm fuzzy” level, it’s not nearly so cold a place as it seems, and plugging into that market is so worth it.
On the third paragraph:
I rarely have problems with hyperfixation. When I do, I just come back to the problem later, or prime myself with a random stimulus. (See Steelmanning Divination.)
Peacefulness is enjoyable and terminally desirable, but in many contexts predators want to induce peacefulness to create vulnerability. Example: buying someone a drink with ill intent. (See “Safety in numbers” by Benjamin Ross Hoffman. I actually like relaxation, but agree with him that feeling relaxed in unsafe environments is a terrible idea. Reality is mostly an unsafe environment. Am getting to that.)
I have no problem enjoying warm fuzzies. I had problems with them after first talking with Vassar, but I re-equilibrated. Warm fuzzies are good, helpful, and worth purchasing. I am not a perfect utilitarian. However, it is important that when you buy fuzzies instead of utils, as Scott would put it, you know what you are buying. Many will sell fuzzies and market them as utils.
I sometimes round things, it is not inherently bad.
Dimming things is not good. I like being alive. From a functionalist perspective, the degree to which I am aroused (with respect to the senses and the mind) is the degree to which I am a real, sapient being. Dimming is sometimes terminally valuable as relaxation, and instrumentally valuable as sleep, but if you believe in Life, Freedom, Prosperity And Other Nice Transhumanist Things then dimming being bad in most contexts follows as a natural consequence.
On the second paragraph:
This is because people compartmentalize. After studying a thing for a long time, people will grasp deep nonverbal truths about that thing. Sometimes they are wrong; without the legibility of explicit elucidation, false ideas gained this way are difficult to destroy. Sometimes they are right! Mathematical folklore is an example: it is literally metis among mathematicians.
Highly knowledgeable and epistemically skilled people delineate. Sometimes the natural delineation is “this is true everywhere and false nowhere.” See “The Proper Use of Humility,” and for an example of how delineations often should be large, “Universal Fire.”
On the first paragraph:
Reality is hostile through neutrality. Any optimizing agent naturally optimizes against most other optimization targets when resources are finite. Lifeforms are (badly) optimized for inclusive genetic fitness. Thermodynamics looks like the sort of Universal Law that an evil god would construct. According to a quick Google search, approximately 3,700 people die in car accidents per day, and people think this is completely normal.
Many things are actually effective. For example, most places in the United States have drinkable-ish running water. This is objectively impressive. A model must not be made entirely out of “the world is evil”, or it runs against the facts. But the natural mental motion you make, as a default, should be: “How is this system produced by an aggressively neutral, entirely mechanistic reality?”
See the entire Sequence on evolution, as well as Beyond the Reach of God.
I mostly see where you’re coming from, but I think the reasonable answer to “point 1 or 2 is a false dichotomy” is this classic, uh, tumblr quote (from memory):
“People cannot just. At no time in the history of the human species has any person or group ever just. If your plan relies on people to just, then your plan will fail.”
This goes especially if the thing that comes after “just” is “just precommit.”
My expectation is that the people who espouse 1 or 2 expect that anyone interacting with Vassar is incapable of precommitting with the required strength. I don’t know if they’re correct, but I’d expect them to be, because I think people are just really bad at precommitting in general. If precommitting were easy, I think we’d all be a lot more fit and get a lot more done. Also, Beeminder would be bankrupt.
This is a very good criticism! I think you are right about people not being able to “just.”
My original point with those strategies was to illustrate an instance of motivated stopping about people in the community who have negative psychological effects, or who criticize popular institutions. Perhaps it is the case that people genuinely tried to construct a strategy before rejecting my toy strategies as false. I do not think so, based on “vibe” and on the arguments that people are making, such as “argument from cult.”
I think you are actually completely correct about those strategies being bad. What I failed to point out is that I expect a certain level of mental robustness-to-nonsanity from people literally called “rationalists.” This comes off as sarcastic, but I mean it completely literally.
Precommitting isn’t easy, but rationality is about solving hard problems. When I think of actual rationality, I think of practices such as “five minutes of actually trying” and alkjash’s “Hammertime.” Humans have a small component of behavior that is agentic, and a huge component that is non-agentic and installed by vaguely agentic processes (simple conditioning, mimicry, social learning). Many problems are solved immediately and almost effortlessly by just giving the reins to the small part.
Relatedly, to address one of your examples, I expect at least one of the following things to be true about any given competent rationalist.
They have a physiological problem.
They don’t believe becoming fit to be worth their time, and have a good reason to go against the naive first-order model of “exercise increases energy and happiness set point.”
They are fit.
Hypocritically, I fail all three of these criteria. I take full blame for this failure and plan on ameliorating it. (You don’t have to take Heroic Responsibility for the world, but you do have to take it for yourself.)
A trope-y way of thinking about it is: “We’re supposed to be the good guys!” Good guys don’t have to be heroes, but they have to be at least somewhat competent, and they have to, as a strong default, treat potential enemies like their equals.
I found many of the things you shared useful. I also expect that because of your style/tone you’ll get downvoted :(
It’s not just Vassar. It’s how the whole community has excused and rationalized away abuse. I think the correct answer to the omega rapist problem isn’t to ignore him but to destroy his agency entirely. He’s still going to alter his decision theory towards rape even if castrated.
I think you are entirely wrong.
However, I gave you a double-upvote because you did nothing normatively wrong. The fact that you are being mass-downvoted just because you linked to that article and because you seem to be associated with Ziz (because of the gibberish name and specific conception of decision theory) is extremely disturbing.
Can we have LessWrong not be Reddit? Let’s not be Reddit. Too late, we’re already Reddit. Fuck.
You are right that, unless people can honor precommitments perfectly and castration is irreversible even with transhuman technology, Omegarapist will still alter his decision theory. Despite this, there are probably better solutions than killing or disabling him. I say this not out of moral ickiness, but out of practicality.
-
Imagine both you and Omegarapist are actual superintelligences. Then you can just perform a utility-function merge to avoid the inefficiency of conflict, and move on with your day.
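To make “utility-function merge” concrete, here is a minimal sketch of one standard construction (my illustration; the weight $w$ is an assumed free parameter, not anything specified in this thread): a Harsanyi-style weighted aggregate

$$U_{\text{merged}}(x) = w\,U_{\text{you}}(x) + (1 - w)\,U_{\text{Omega}}(x), \qquad 0 \le w \le 1,$$

after which both agents optimize $U_{\text{merged}}$ instead of spending resources on conflict. How $w$ gets set is itself a bargaining problem (a Nash bargaining solution over the no-deal outcome would be one way to pin it down); humans can only approximate this kind of deal informally.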
Humans have a similar form of this. Humans, even when sufficiently distinct in moral or factual position as to want to kill each other, often don’t. This is partly because of an implicit assumption that their side, the correct side, will win in the end, and that this is less true if they break the symmetry and use weapons. Scott uses the example of a pro-life and a pro-choice person having dinner together, and calls it “divine intervention.”
There is an equivalent of this with Omegarapist. Make some sort of pact and honor it: he won’t rape people, and you won’t report his previous rapes to the Scorched Earth Dollar Auction squad. Work together on decision theory until the project is complete. Then agree either to utility-merge with him into the resulting utility function, or just shoot him. I call this “swordfighting at the edge of a cliff while shouting about our ideologies.” I would be willing to work with Moldbug on Strong AI, but if we had to input the utility function, the person who would win would be determined by a cinematic swordfight. In a similar case with my friend Sudo Nim, we could just merge utilities.
If you use the “shoot him” strategy, Omegarapist is still dead; you just got useful work out of him first. If he rapes people, just call in the Dollar Auction squad. The problem here isn’t cooperating with Omegarapist; it’s thinking to oneself “he’s too useful to actually follow precommitments about punishing him” if he defects against you. This is fucking dumb. There’s a great webnovel called Reverend Insanity which depicts what organizations look like when everyone uses pure CDT like this. It isn’t pretty, and it’s also a very accurate depiction of the real-world landscape.
Oh come on. The post was downvoted because it was inflammatory and low quality. It made a sweeping assertion while providing no evidence except a link to an article that I have no reason to believe is worth reading. There is a mountain of evidence that being negative is not a sufficient cause for being downvoted on LW, e.g. the OP.
(FYI, the OP has 154 votes and 59 karma, so it is both heavily upvoted and heavily downvoted.)
You absolutely have a reason to believe the article is worth reading.
If you live coordinated with an institution, spending 5 minutes of actually trying (every few months) to see if that institution is corrupt is a worthy use of time.
I read the linked article, and my conclusion is that it’s not even in the neighborhood of “worth reading”.
I don’t think I live coordinated with CFAR or MIRI, but it is true that, if they are corrupt, this is something I would like to know.
However, that’s not sufficient reason to think the article is worth reading. There are many articles making claims that, if true, I would very much like to know (e.g. someone arguing that the Christian Hell exists).
I think the policy I follow (although I hadn’t made it explicit until now) is to ignore claims like this by default but listen up as soon as I have some reason to believe that the source is credible.
Which incidentally was the case for the OP. I have spent a lot more than 5 minutes reading it and the replies, and I have, in fact, updated my view of CFAR and MIRI. It wasn’t a massive update in the end, but it also wasn’t negligible. I also haven’t downvoted the OP, and I believe I also haven’t downvoted any comments from jessicata. I’ve upvoted some.
This is fair, actually.
...and then pushing them.
So, this seems deliberate. [EDIT: Or not. Zack makes a fair point.] He is not even hiding it, if you listen carefully.
Because high-psychoticism people are the ones who are most likely to understand what he has to say.
This isn’t nefarious. Anyone trying to meet new people to talk to, for any reason, is going to preferentially seek out people who are a better rather than worse match. Someone who didn’t like our robot cult could make structurally the same argument about, say, efforts to market Yudkowsky’s writing (like spending $28,000 distributing copies of Harry Potter and the Methods to math contest winners): why, they’re preying on innocent high-IQ systematizers and filling their heads with scary stories about the coming robot apocalypse!
I mean, technically, yes. But in Yudkowsky and friends’ worldview, the coming robot apocalypse is actually real, and high-IQ systematizers are the people best positioned to understand this important threat. Of course they’re going to try to market their memes to that neurotype-demographic. What do you expect them to do? What do you expect Michael to do?
There’s a sliding scale ranging from seeking out people who are better at understanding arguments in general to seeking out people who are biased toward agreeing with a specific set of arguments (and perhaps made better at understanding those arguments by that bias). Targeting math contest winners seems more toward the former end of the scale than targeting high-psychoticism people. This is something that seems to me to be true independently of the correctness of the underlying arguments. You don’t have to already agree about the robot apocalypse to be able to see why math contest winners would be better able to understand arguments for or against the robot apocalypse.
If Yudkowsky and friends were deliberately targeting arguments for short AI timelines at people who already had a sense of a foreshortened future, then that would be more toward the latter end of the scale, and I think you’d object to that targeting strategy even though they’d be able to make an argument structurally the same as your comment.
Yudkowsky and friends are targeting arguments that AGI is important at people already likely to believe AGI is important (and who are open to thinking it’s even more important than they think), e.g. programmers, transhumanists, and reductionists. The case is less clear for short timelines specifically, given the lack of public argumentation by Yudkowsky etc, but the other people I know who have tried to convince people about short timelines (e.g. at the Asilomar Beneficial AI conference) were targeting people likely to be somewhat convinced of this, e.g. people who think machine learning / deep learning are important.
In general this seems really expected and unobjectionable? “If I’m trying to convince people of X, I’m going to find people who already believe a lot of the pre-requisites for understanding X and who might already assign X a non-negligible prior”. This is how pretty much all systems of ideas spread, I have trouble thinking of a counterexample.
I mean, do a significant number of people not select who they talk with based on who already agrees with them to some extent and is paying attention to similar things?
If short timelines advocates were seeking out people with personalities that predisposed them toward apocalyptic terror, would you find it similarly unobjectionable? My guess is no. It seems to me that a neutral observer who didn’t care about any of the object-level arguments would say that seeking out high-psychoticism people is more analogous to seeking out high-apocalypticism people than it is to seeking out programmers, transhumanists, reductionists, or people who think machine learning / deep learning are important.
The way I can make sense of seeking high-psychoticism people being morally equivalent to seeking high-IQ systematizers is if I drain any normative valence from “psychotic” and imagine there is a spectrum from autistic to psychotic. On this spectrum, the extreme autistic is exclusively focused on exactly one thing at a time and is incapable of cognition that has to take into account context, especially context they aren’t already primed to have in mind, while the extreme psychotic can only see the globally interconnected context where everything means/is connected to everything else. Obviously neither extreme state is desirable, but leaning one way or the other could be very helpful in different contexts.
See also: indexicality.
On the other hand, back in my reflective beliefs, I think psychosis is a much scarier failure mode than “autism,” on this scale, and I would not personally pursue any actions that pushed people toward it without, among other things, a supporting infrastructure of some kind for processing the psychotic state without losing the plot (social or cultural would work, but whatever).
I wouldn’t find it objectionable. I’m not really sure what morally relevant distinction is being pointed at here, apocalyptic beliefs might make the inferential distance to specific apocalyptic hypotheses lower.
Well, I don’t think it’s obviously objectionable, and I’d have trouble putting my finger on the exact criterion for objectionability we should be using here. Something like “we’d all be better off in the presence of a norm against encouraging people to think in ways that might be valid in the particular case where we’re talking to them, but whose appeal comes from emotional predispositions we sought out in them that aren’t generally either truth-tracking or good for them” seems plausible to me. But I think it’s not as obviously unobjectionable as Zack seemed to be suggesting in his last few sentences, which is what moved me to comment.
I don’t have well-formed thoughts on this topic, but one factor that seems relevant to me has a core that might be verbalized as “susceptibility to invalid methods of persuasion”, which seems notably higher in the case of people with high “apocalypticism” than people with the other attributes described in the grandparent. (A similar argument applies in the case of people with high “psychoticism”.)
That might be relevant in some cases but seems unobjectionable both in the psychoticism case and the apocalypse case. I would predict that LW people cluster together in personality measurements like OCEAN and Eysenck, it’s by default easier to write for people of a similar personality to yourself. Also, people notice high rates of Asperger’s-like characteristics around here, which are correlated with Jewish ethnicity and transgenderism (also both frequent around here).
It might not be nefarious.
But it might also not be very wise.
I question Vassar’s wisdom, if what you say is indeed true about his motives.
I question whether he’s got the appropriate feedback loops in place to ensure he is not exacerbating harms. I question whether he’s appropriately seeking that feedback rather than turning away from the kinds he finds overwhelming, distasteful, unpleasant, or doesn’t know how to integrate.
I question how much work he’s done on his own shadow and whether it’s not inadvertently acting out in ways that are harmful. I question whether he has good friends he trusts who would let him know, bluntly, when he is out of line with integrity and ethics or if he has ‘shadow stuff’ that he’s not seeing.
I don’t think this needs to be hashed out in public, but I hope people are working closer to him on these things who have the wisdom and integrity to do the right thing.
Rumor has it that https://www.sfgate.com/news/bayarea/article/Man-Gets-5-Years-For-Attacking-Woman-Outside-13796663.php is due to drugs Vassar recommended. The OP blames that case on CFAR’s environment without mentioning that part.
When talking about whether or not CFAR is responsible for that story, factors like that seem to me to matter quite a bit. I’d love for anyone who’s nearer to confirm/deny the rumor and fill in missing pieces.
As I mentioned elsewhere, I was heavily involved in that incident for a couple of months after it happened, and I looked for causes that could help with the defense. AFAICT, no drugs were taken in the days leading up to the mental health episode or arrest (or people who took drugs with him lied about it).
I, too, asked people questions after that incident and failed to locate any evidence of drugs.
As I heard this story, Eric was actively seeking mental health care on the day of the incident, and should have been committed before it happened, but several people (both inside and outside the community) screwed up. I don’t think anyone is to blame for his having had a mental break in the first place.
I’ve now got better-sourced information from a friend who’s in good contact with Eric. Given that, I’m also quite certain that there were no drugs involved and that this isn’t a case of any one person being mainly responsible for what happened, but of multiple people making bad decisions. I’m currently hoping that Eric will tell his side himself, so that there’s less indirection about the information sourcing; I’m not saying more about the details at this point in time.
Edit: The following account is a component of a broader and more complex narrative. While it played a significant role, it must be noted that there were numerous additional challenges concurrently affecting my life. Absent these complicating factors, the issues delineated in this post alone may not have precipitated such severe consequences. Additionally, I have made minor revisions to the third-to-last bullet point for clarity.
It is pertinent to provide some context to parts of my story that are relevant to the ongoing discussions.
My psychotic episode was triggered by a confluence of factors, including acute physical and mental stress, as well as exposure to a range of potent memes. I have composed a detailed document on this subject, which I have shared privately with select individuals. I am willing to share this document with others who were directly involved or have a legitimate interest. However, a comprehensive discussion of these details is beyond the ambit of this post, which primarily focuses on the aspects related to my experiences with Vassar.
During my psychotic break, I believed that someone associated with Vassar had administered LSD to me. Although I no longer hold this belief, I cannot entirely dismiss it. Nonetheless, given my deteriorated physical and mental health at the time, the vividness of my experiences could be attributed to a placebo effect or the onset of psychosis.
My delusions prominently featured Vassar. At the time of my arrest, I had a notebook with multiple entries stating “Vassar is God” and “Vassar is the Devil.” This fixation partly stemmed from a conversation with Vassar, where he suggested that my “pattern must be erased from the world” in response to my defense of EA. However, it was primarily fueled by the indirect influence of someone from his group with whom I had more substantial contact.
This individual was deeply involved in a psychological engagement with me in the months leading to my psychotic episode. In my weakened state, I was encouraged to develop and interact with a mental model of her. She once described our interaction as “roleplaying an unfriendly AI,” which I perceived as markedly hostile. Despite the negative turn, I continued the engagement, hoping to influence her positively.
After joining Vassar’s group, I urged her to critically assess his intense psychological methods. She relayed a conversation with Vassar about “fixing” another individual, Anna (Salamon), to “see material reality” and “purge her green.” This exchange profoundly disturbed me, leading to a series of delusions and ultimately exacerbating my psychological instability, culminating in a psychotic state. This descent into madness continued for approximately 36 hours, ending with an attempted suicide and an assault on a mental health worker.
Additionally, it is worth mentioning that I visited Leverage on the same day. Despite exhibiting clear signs of delusion, I was advised to exercise caution with psychological endeavors. Ideally, further intervention, such as suggesting professional help or returning me to my friends, might have been beneficial. I was later informed that I was advised to return home, though my recollection of this is unclear due to my mental state at the time.
In the hotel that night, my mental state deteriorated significantly after I performed a mental action which I interpreted as granting my mental model of Vassar substantial influence over my thoughts, in an attempt to regain stability.
While there are many more intricate details to this story, I believe the above summary encapsulates the most critical elements relevant to our discussion.
I do not attribute direct blame to Vassar, as it is unlikely he either intended or could have reasonably anticipated these specific outcomes. However, his approach, characterized by high-impact psychological interventions, can inadvertently affect the mental health of those around him. I hope that he has recognized this potential for harm and exercises greater caution in the future.
Thank you for sharing such personal details for the sake of the conversation.
Thanks for sharing the details of your experience. FYI, I had a trip earlier in 2017 where I had the thought “Michael Vassar is God” and told a couple of people about this; it was overall a good trip and didn’t cause paranoia afterwards, etc.
If I’m trying to put my finger on a real effect here, it’s related to how Michael Vassar was one of the initial people who set up the social scene (e.g., running Singularity Summits and being executive director of SIAI), being on the more “social/business development/management” end relative to someone like Eliezer. So if you live in the scene, which can be seen as a simulacrum, the people most involved in setting up the scene/simulacrum have the most aptitude at affecting memes related to it, the way a world-simulator programmer has more aptitude at affecting the simulation than people within the simulation (though to a much lesser degree, of course).
As a related example, Von Neumann was involved in setting up post-WWII US Modernism, and is also attributed extreme mental powers by modernism (e.g. extreme creativity in inventing a wide variety of fields); in creating the social system, he also has more memetic influence within that system, and could more effectively change its boundaries e.g. in creating new fields of study.
2017 would be the year Eric’s episode happened as well. Did this result in multiple conversations about “Michael Vassar is God” that Eric might then have picked up on when he hung around the group?
I don’t know, some of the people were in common between these discussions so maybe, but my guess would be that it wasn’t causal, only correlational. Multiple people at the time were considering Michael Vassar to be especially insightful and worth learning from.
I haven’t used the word god myself, nor have I heard it used by other people to refer to someone who’s insightful and worth learning from. Traditionally, people learn from prophets, not from gods.
Can someone please clarify what is meant in this context by “Vassar’s group”, or the term “Vassarites” used by others?
My intuition previously was that Michael Vassar had no formal “group” or institution of any kind, and that it was more like “a cluster of friends who hung out together a lot”, but this comment makes it seem like something more official.
While “Vassar’s group” is informal, it’s more than just a cluster of friends; it’s a social scene with lots of shared concepts, terminology, and outlook (although of course not every member holds every view and members sometimes disagree about the concepts, etc etc). In this way, the structure is similar to social scenes like “the AI safety community” or “wokeness” or “the startup scene” that coordinate in part on the basis of shared ideology even in the absence of institutional coordination, albeit much smaller. There is no formal institution governing the scene, and as far as I’ve ever heard Vassar himself has no particular authority within it beyond individual persuasion and his reputation.
Median Group is the closest thing to a “Vassarite” institution, in that its listed members are 2⁄3 people who I’ve heard/read describing the strong influence Vassar has had on their thinking and 1⁄3 people I don’t know, but AFAIK Median Group is just a project put together by a bunch of friends with similar outlook and doesn’t claim to speak for the whole scene or anything.
As a member of that cluster I endorse this description.
Michael and I are sometimes-housemates and I’ve never seen or heard of any formal “Vassarite” group or institution, though he’s an important connector in the local social graph, such that I met several good friends through him.
Thank you very much for sharing. I wasn’t aware of any of these details.
It sounds like you’re saying that based on extremely sparse data you made up a Michael Vassar in your head to drive you crazy. More generally, it seems like a bunch of people on this thread, most notably Scott Alexander, are attributing spooky magical powers to him. That is crazy cult behavior and I wish they would stop it.
ETA: In case it wasn’t clear, “that” = multiple people elsewhere in the comments attributing spooky mind control powers to Vassar. I was trying to summarize Eric’s account concisely, because insofar as it assigns agency at all I think it does a good job assigning it where it makes sense to, with the person making the decisions.
Reading through the comments here, I perceive a pattern of short-but-strongly-worded comments from you, many of which seem to me to contain highly inflammatory insinuations while giving little impression of any investment of interpretive labor. It’s not [entirely] clear to me what your goals are, but barring said goals being very strange and inexplicable indeed, it seems to me extremely unlikely that they are best fulfilled by the discourse style you have consistently been employing.
To be clear: I am annoyed by this. I perceive your comments as substantially lower-quality than the mean, and moreover I am annoyed that they seem to be receiving engagement far in excess of what I believe they deserve, resulting in a loss of attentional resources that could be used engaging more productively (either with other commenters, or with a hypothetical-version-of-you who does not do this). My comment here is written for the purpose of registering my impressions, and making it common-knowledge among those who share said impressions (who, for the record, I predict are not few) that said impressions are, in fact, shared.
(If I am mistaken in the above prediction, I am sure the voters will let me know in short order.)
I say all of the above while being reasonably confident that you do, in fact, have good intentions. However, good intentions do not ipso facto result in good comments, and to the extent that they have resulted in bad comments, I think one should point this fact out as bluntly as possible, which is why I worded the first two paragraphs of this comment the way I did. Nonetheless, I felt it important to clarify that I do not stand against [what I believe to be] your causes here, only the way you have been going about pursuing those causes.
(For the record: I am unaffiliated with MIRI, CFAR, Leverage, MAPLE, the “Vassarites”, or the broader rationalist community as it exists in physical space. As such, I have no direct stake in this conversation; but I very much do have an interest in making sure discussion around any topics this sensitive are carried out in a mature, nuanced way.)
If you want to clarify whether I mean to insinuate something in a particular comment, you could ask, like I asked Eliezer. I’m not going to make my comments longer without a specific idea of what’s unclear, that seems pointless.
It is accurate to state that I constructed a model of him based on limited information, which subsequently contributed to my dramatic psychological collapse. Nevertheless, the reason for developing this particular model can be attributed to his interactions with me and others. This was not due to any extraordinary or mystical abilities, but rather his profound commitment to challenging individuals’ perceptions of conventional reality and mastering the most effective methods to do so.
This approach is not inherently negative. However, it must be acknowledged that for certain individuals, such an intense disruption of their perceived reality can precipitate a descent into a detrimental psychological state.
Thanks for verifying. In hindsight my comment reads as though it was condemning you in a way I didn’t mean to; sorry about that.
The thing I meant to characterize as “crazy cult behavior” was people in the comments here attributing things like what you did in your mind to Michael Vassar’s spooky mind powers. You seem to be trying to be helpful and informative here. Sorry if my comment read like a personal attack.
This can be unpacked into an alternative to the charisma theory.
Many people are looking for a reference person to tell them what to do. (This is generally consistent with the Jaynesian family of hypotheses.) High-agency people are unusually easy to refer to, because they reveal the kind of information that allows others to locate them. There’s sufficient excess demand that even if someone doesn’t issue any actual orders, if they seem to have agency, people will generalize from sparse data to try to construct a version of that person that tells them what to do.
A more culturally central example than Vassar is Dr Fauci, who seems to have mostly reasonable opinions about COVID, but is worshipped by a lot of fanatics with crazy beliefs about COVID.
The charisma hypothesis describes this as a fundamental attribute of the person being worshipped, rather than a behavior of their worshippers.
If this information isn’t too private, can you send it to me? scott@slatestarcodex.com
I have sent you the document in question. As the contents are somewhat personal, I would prefer that it not be disseminated publicly. However, I am amenable to it being shared with individuals who have a valid interest in gaining a deeper understanding of the matter.