At the risk of this looking too much like me fighting a strawman...
Cults may have a tendency to interact and pick up adaptations from each other, but it seems wrong to operate on the assumption that they’re all derivatives of one ancestral “proto-cult” or whatever. Cult leaders are not literal vampires, where the only way to become a cult leader is to get bitten by a previous one.
It’s a cultural attractor, and a cult is a social technology simple enough that it can be spontaneously re-derived. But cults can sometimes pick up or swap beliefs & virulence factors with each other, when they interact. And I do think Ziz picked up a few beliefs from the Vassarite cluster.
I can dig up cases in Ziz’s writing where Ziz has interacted with Vassar before, or may have indirectly learned things from him through Alice.
That doesn’t make Vassar directly responsible for Ziz’s actions, though, and I don’t think he is.
I do want to spell this out, because I’m reading a subtle implication here that I want to push back against.
I like something about this formulation? No idea if you have time, but I’d be interested if you expanded on it.
I’m not convinced “high-energy” is the right phrasing, since the attributes (as I see them) seem to be:
- Diverges from your current worldview
- High-confidence
- Expressed or taken up in a way that allows little space for uncertainty/wavering. May take a dark attitude toward ensembling it with other worldviews.
- May have a lot of internal consistency.
  - “Unreasonable internal consistency” is (paradoxically) sometimes a marker for reality, and sometimes a tell that something is truly mad and self-reinforcing.
- Pushes a large change in behavior, and pushes it hard
  - The change is costly, at least under your original paradigm
  - The change may be sticky (& here are some possible mechanisms)
    - Activates morality or tribal-affiliation concerns
      - “If you hear X, and don’t believe X and convert X into praxis immediately… then you are our enemy and are infinitely corrupt” or similar attitudes and beliefs
    - Hard to get data that updates you out of the expensive behavior
      - ex: Ziz using revenge to try to change the incentive landscape in counterfactual/multiverse-branching universes, which you cannot directly observe? If you can’t observe it, there’s no clear way to learn that it isn’t working and update out. (I believe this is how she justifies resisting arrest, too.)
  - The change in behavior comes with an exhortation for you to do lots of things that spread the idea to other people.
    - This is sometimes an indicator of highly-contagious memes that were selected more for virulence than for usefulness to the bearer. (Not always, though.)
  - Leaves you with too little slack to re-evaluate what you’ve been doing, or corrupts your re-evaluation metrics.
    - ex: It feels like you’d need to argue with someone who is hard to argue with, or else you’ve dismissed it prematurely. That would be really bad. You model that argument as likely to go poorly, and you really don’t want to…
      - This sentiment shows up really commonly among people deeply affected by “reality warper” people and their beliefs? It shows up in normal circumstances, too. It seems much, much more intense in “reality warper” cases, though.
I would add that some people seem to have a tendency to take what would be a low-energy meme in most hands, and turn it into a high-energy form? I think this attribute characterizes some varieties of charisma, and is common among “reality warpers.”
(Awkwardly, I think “mapping high-minded ideas to practical behaviors” is also an incredibly useful attribute of highly-practical, highly-effective people? Good leaders are often talented at this subskill, not just bad ones. Discernment about which ideas you take seriously can make a really big difference in the outcomes here.)
Some varieties of couching or argumentation will push extreme changes in behavior and action harder than others, for the same idea. Some varieties of receptivity and listening seem more likely to take up ideas as high-energy memes.
I feel like Pascal’s Mugging is related, but not the only case. Ex: Under Utilitarianism, you can also justify a costly behavior by arguing from very high certainty of a moderate benefit. However, this is usually not as sticky, and it is more likely to rapidly right itself if future data disputes the benefit.
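To make that contrast concrete, here is a rough sketch in my own framing, with made-up illustrative numbers rather than anyone’s actual figures. Both cases justify paying a cost c because the expected benefit exceeds it, but they lean on different factors:

$$\text{mugging-shaped: } \epsilon \cdot V > c, \qquad \epsilon \approx 10^{-9},\ V \text{ astronomically large and unobservable}$$

$$\text{high-certainty: } p \cdot B > c, \qquad p \approx 0.9,\ B \text{ modest and observable}$$

In the second case you keep getting feedback on p and B, so the inequality can flip and you drop the costly behavior; in the first, no realistic observation moves ε or V enough to flip it, which is part of what makes it sticky.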