I have talked to Vassar. While he has a lot of "explicit control over conversations", which could be called charisma, I'd hypothesize that the fallout actually comes from his ideas (the charisma/intelligence just makes him able to argue them credibly).
My hypothesis is the following: I've met a lot of rationalists and rationalist-adjacent people. A lot of them care very deeply about EA and AI alignment. In fact, it seems to be a core part of many of these people's identity ("I'm an EA person, thus I'm a good person doing important work"). Two anecdotes to illustrate this:
- I recently argued with a committed EA person. Eventually, I started feeling almost bad about arguing (even though we're both self-declared rationalists!) because I realised that my line of reasoning called his entire life into question. His identity was built deeply on EA; his job had been chosen to maximize the money he could give to charity.
- I had a conversation with a few unemployed rationalist computer scientists and suggested we might start a company together. One answer I got: "Only if it works on the alignment problem; everything else is irrelevant to me."
Vassar argues very persuasively against EA and against the work done at MIRI/CFAR (he doesn't dispute that alignment is a problem, AFAIK). Assuming these people are largely defined by those ideas, one can see how such arguments could threaten their identity. I've read "I'm an evil person" from multiple people relating their "Vassar-psychosis" experience. To me it's very easy to see how one could get there if the defining part of one's identity is "I'm a good person because I work on EA/Alignment" and one then accepts the "EA/Alignment is a scam" arguments.
It also makes Vassar look like a genius (or a god), because "why wouldn't the rest of the rationalists see the arguments?", when it's really just a group-bias phenomenon: the social truth of the rationalist group is that EA is obviously good and AI alignment terribly important.
This would probably predict that the people experiencing "Vassar-psychosis" had a stronger-than-average identity constructed around EA/CFAR/MIRI?