Worried that I might already be a post-rationalist. I’m very interested in minimizing miscommunication, and helping people through the uncanny valley of rationality. Feel free to pm me about either of those things.
Hazard
Do Jessica's Anti-Normativity post or Ben's Can Crimes be Discussed Literally & Guilt, Shame, Depravity posts make sense to you? If there are specific posts you want to talk about not making sense / not being clear what the point is, I'm down to chat about them.
For anyone curious about what the sPoOkY and mYsTeRiOuS Michael Vassar actually thinks about various shit, many of his friends have blogs and write about what they chat about, and he’s also been on several long form podcasts.
https://naturalhazard.xyz/ben_jess_sarah_starter_pack
https://open.spotify.com/episode/1lJY2HJNttkwwmwIn3kyIA?si=em0lqkPaRzeZ-ctQx_hfmA
https://open.spotify.com/episode/01z3WDSIHPDAOuVp1ZYUoN?si=VOtoDpw9T_CahF31WEhZXQ
https://open.spotify.com/episode/2RzlQDSwxGbjloRKqCh1xg?si=XuFZB1CtSt-FbCweHtTnUA
https://open.spotify.com/episode/33nrhLwrNJJtbZolZsTUGN?si=Sd0dZTANTpy8FS-RFhr4cQ
It's a term Scott Alexander coined a few years ago when he was saying Jessica Taylor was crazy for thinking people could have spooky mind powers that let them exert control over others, right before he said Michael has spooky mind powers that let him exert control over others.
For the record, I associate with Michael, and thus am very spooky. If anyone wants to make sure I’m not around them during an altered state hit me up and we can coordinate.
This post is my current recommendation for practicing getting a felt sense for the range of emotions you can be experiencing.
https://drmaciver.substack.com/p/labelling-feelings
I'm curious if what you're describing is similar to what I'm describing in this post. When I started paying more attention to emotions I'd often feel these impenetrable clouds of grey that I couldn't discern much content from. https://naturalhazard.xyz/then_and_now
I agree that disguising oneself as "someone who cares about X" doesn't require being good at X, at least when you only have short, contained contact with them.
I’m trying to emphasize that I don’t think Cade has made any progress in learning to “say the right things”. I think he has probably learned more individual words that are more frequent in a rationalist context than not (like the word “priors”), but it seems really unlikely that he’s gotten any better at even the grammar of rationalist communication.
Like, I’d be mediumly surprised if he, when talking to a rat, said something like “so what’s your priors on XYZ?” I’d be incredibly surprised if he said something like “there’s clearly a large inferential distance between your world model and the public’s world model, so maybe you could help point me towards what you think the cruxes might be for my next article?”
That last sentence seems like a v clear example of something that doesn't actually require understanding or caring about epistemology to utter, yet if I heard it I'd assume a certain orientation to epistemology, and someone could falsely get me to "let my guard down". I don't think Cade can do things like that. And based on Zack's convo and Vassar's convo with him, and the amount of time and exposure he's had to learn between the two convos, I don't think that's the sort of thing he's capable of.
I might be misunderstanding, but I understood the comment I was responding to as saying that Zack was helping Cade do a better job of disguising himself as someone who cared about good epistemics. Something like "if Zack keeps talking, Cade will learn the surface level features of a good convo about epistemology and thus, even if he still doesn't know shit, he'll be able to trick more people into thinking he's someone worth talking to."
In response to that claim, I shared an older interview of Cade to demonstrate that he's been exposed to people who talk about epistemology for a while, and he did not do a convincing job of pretending to be in good faith then, and in this interview with Zack I don't think he's doing any better a job of seeming like he's acting in good faith.
And while there can still be plenty of reasons to not talk to journalists, or Cade in particular, I really don't think "you'll enable them to mimic us better" is remotely plausible.
I can visibly see you training him, via verbal conversation, how to outperform the vast majority of journalists at talking about epistemics.
Metz doesn’t seem any better at seeming like he cares about or thinks at all about epistemics than he did in 2021.
Symbiotic would be a mutually beneficial relationship. What I described is very clearly not that.
Yeah, the parasitic dynamic seems to set up the field for the scapegoating backup such that I’d expect to often find the scapegoating move in parasitic ecosystems that have been running their course for a while.
Your comment seems like an expansion on who is the party being fooled and it also points out another purpose for the obfuscation. A defense of pre-truth would be a theory that shows how it’s not deceptive and not a way to cover up a conflict. That being said I agree that an investor that plays pre-truth does want founders to lie, and it seems very plausible that they orient to their language game as a “figure it out” initiation ritual.
I'm with you on the deficiency of the signalling frame when talking about human communication and communication more generally. Skyrms and others who developed the signalling frame explicitly tried to avoid having a notion of intentionality in order to explore questions like "how could the simplest things that still make sense to call 'communication' develop in systems that don't have human level intelligence?", which means the model has a gaping hole when trying to talk about what people do.
I wrote a post about the interplay between the intentional aspects of meaning and what you're calling the probabilistic information. It doesn't get too into the weeds, but might provoke more ideas in you.
Not quite what you're looking for, but if you've got a default sense that coordination is hard, Jessica Taylor has an evocatively named post, Coordination isn't hard.
I remember at some point finding a giant messy graph that was all of The Sequences and the links between posts. I can’t track down the link, anyone remember this and have a lead?
When I was drafting my comment, the original version of the text you first quoted was, "Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about 'HEY DON'T USE THIS TO SCAPEGOAT' (which people are totally capable of ignoring)". I guess I should have left that in there. I don't think it's uncommon to ignore such disclaimers; I do think it actively opposes behaviors and discourse norms I wish to see in the world.
I agree that putting an "I'm not trying to blame anyone" disclaimer can be a pragmatic rhetorical move for someone attempting to scapegoat. There's an alternate timeline version of Jessica that wrote this post as a well crafted, well defended rhetorical attack, where the literal statements in the post all clearly say "don't fucking scapegoat anyone, you fools" but all the associative and impressionistic "dark implications" (Vaniver's language) say "scapegoat CFAR/MIRI!" I want to draw your attention to the fact that for a potential dark implication to do anything, you need people who can pick up that signal. For it to be an effective rhetorical move, you need a critical mass of people who are well practiced in ignoring literal speech, who understand on some level that the details don't matter, and who are listening in for "who should we blame?"
To be clear, I think there is such a critical mass! I think this is very unfortunate! (though not awkward, as Scott put it) There was a solid 2+ days where Scott and Vaniver's insistence on this being a game of "Scapegoat Vassar vs scapegoat CFAR/MIRI" totally sucked me in, and instead of reading the contents of anyone's comments I was just like "shit, whose side do I join? How bad would it be if people knew I hung out with Vassar once? I mean I really loved my time at CFAR, but I'm also friends with Ben and Jess. Fuck, but I also think Eli is a cool guy! Shit!" That mode of thinking I engaged in is a mode that can't really get me what I want, which is larger and larger groups of people that understand scapegoating dynamics and related phenomena.
This also seems too strong to me. I expect that many movement EAs will read Zoe's post and think "well, that's enough information for me to never have anything to do with Geoff or Leverage." This isn't because they're not interested in justice, it's because they don't have the time or the interest to investigate every allegation, so they're using some rough heuristics and policies such as "if something looks sufficiently like a dangerous cult, don't even bother giving it the benefit of the doubt."
Okay, I think my statement was vague enough to be mistaken for a statement I think is too strong. Though I expect you might consider my clarification too strong as well :)
I was thinking about the "in any way that matters" part. I can see how that implies a sort of disregard for justice that spans across time. Or more specifically, I can see how you would think it implies that certain conversations you've had with EA friends were impossible, or that they were lying/confabulating the whole convo, and you don't think that's true. I don't think that's the case either. I'm thinking about it as more piece-wise behavior. One can sincerely care about justice, but in that particular moment where they read Jess's post, ignore the giant disclaimer about scapegoating, and try to scapegoat MIRI/CFAR/Leverage, the cognitive processes generating their actions aren't aligned with justice, and are working against it. Almost like an "anti-justice traumatic flashback", but most of the time it's much more low-key and less intense than what you will read about in the literature on flashbacks. Malcolm Ocean does a great job of describing this sort of "falling into a dream" in his post Dream Mashups (his post is not about scapegoating, it's about ending up running a cognitive algo that hurts you without noticing).
To be clear, I'm not saying such behavior is contemptible, blameworthy, bad, or to-be-scapegoated. I am saying it's very damaging, and I want more people to understand how it works. I want to understand how it works more. I would love to not get sucked into as many anti-justice dreams where I actively work against creating the sort of world I want to live in.
So when I said "not aligned with justice in any important relevant way", that was more a statement about "how often and when will people fall into these dreams?" Sorta like the concept of a "fair weather friend", my current hunch is that people fall into scapegoating behavior exactly when it would be most helpful for them not to. And reading a post about "here's some problems I see in this institution that is at the core of our community" is exactly when it is most important for one's general atemporal commitment to justice to be present in one's actual thoughts and actions.
This makes a lot of sense. I can notice ways in which I generally feel more threatened by social invalidation than by actual concrete threats of violence.
I’m not sure what writing this comment felt like for you, but from my view it seems like you’ve noticed a lot of the dynamics about scapegoating and info-suppression fields that Ben and Jessica have blogged about in the past (and occasionally pointed out in the course of these comments, though less clearly). I’m going to highlight a few things.
I do think that Jessica writing this post will predictably have reputational externalities that I don’t like and I think are unjustified.
Broadly, I think that onlookers not paying much attention would have concluded from Zoe's post that Leverage is a cult that should be excluded from polite society, and hearing of both Zoe's and Jessica's posts, are likely to conclude that Leverage and MIRI are similarly bad cults.
I totally agree with this. I also think that the degree to which an "onlooker not paying much attention" concludes this is the degree to which they are habituated to engaging with discussion of wrongdoing as scapegoating games. This seems to be very common (though incredibly damaging) behavior. Scapegoating works on the associative/impressionistic logic of "looks", and Jessica's post certainly makes CFAR/MIRI "look" bad. This post can be used as "material" or "fuel" for scapegoating, regardless of Jessica's intent in writing it. Though it can't be used honestly to scapegoat (if there even is such a thing). Anyone using this piece to scapegoat needs to ignore the giant upfront paragraph about "HEY, DON'T USE THIS TO SCAPEGOAT", and has no plausible claim to doing justice, upholding rules, or caring about the truth of the matter in any important relevant sense.
(aside, from both my priors on Jess and my reading of the post it was clear to me that Jess wasn’t trying to scapegoat CFAR/MIRI. It also simply isn’t in Jess’s interests for them to be scapegoated)
Another thought: CFAR/MIRI already "look" crazy to most people who might check them out. UFAI, cryonics, and acausal trade are all things that "look" crazy. And yet we're all able to talk about them on LW without worrying about "how it looks", because many, many conversations, sequences, blog posts, comments, etc have created a community with different common knowledge about what will result in people ganging up on you.
Something that we as a community don’t talk a lot about is power structures, coercion, emotional abuse, manipulation, etc. We don’t collectively build and share models on their mechanics and structure. As such, I think it’s expected that when “things get real” people abandon commitment to the truth in favor of “oh shit, there’s an actual conflict, I or others could be scapegoated, I am not safe, I need to protect my people from being scapegoated at all cost”.
However, I think that we mostly shouldn't be in the business of trying to cater to bystanders who are not invested in understanding what is actually going on in detail, and we especially should not compromise the discourse of people who are invested in understanding.
I totally agree, and I think if you explore this sense you already sorta see how commitment to making sure things “look okay” quickly becomes a commitment to suppress information about what happened.
(aside, these are some of Ben's posts that have been most useful to me for understanding some of this stuff)
I found many things you shared useful. I also expect that because of your style/tone you'll get downvoted :(
Thinking about people I know who’ve met Vassar, the ones who weren’t brought up to go to college* seem to have no problem with him and show no inclination to worship him as a god or freak out about how he’s spooky or cultish; to them, he’s obviously just a guy with an interesting perspective.
This is very interesting to me! I'd like to hear more about how the two groups' behavior looks different, and also your thoughts on what the difference that makes the difference is: what are the pieces of "being brought up to go to college" that lead to one class of reactions?
Okay, you made me realize I've been wrong about Michael. Your comment is the single most credible instance I've seen of him causing acute psychosis in an individual. Well, I guess it's more the idea of Michael (and Ben), because no one who reads the linked blog posts or listens to the linked podcasts could mistakenly think your comments had anything to do with their content. I mean, it's possible a casual observer might mistake your earlier characterization of their content, "isn't this just saying people can sometimes be hypocrites?", for merely garden variety functional illiteracy, but if they knew anything about this website and the high verbal IQ it selects for, they'd know to rule that possibility out immediately.
I’d also forgive someone for mistaking your comments for garden variety tribalism and treating arguments as your soldiers, but again, one needs to take into account the context of the website we’re on. There’s no way Viliam could expect to stay in good standing with this community if he pretended he couldn’t read while also making up a totally fabricated version of what others are saying. Like, maybe if the texts/audio in question were hidden and he had privileged access to them he could leverage his reputation and get people to take his word for it, but they’re all on the open internet and people can just read them! He would obviously have zero reason to expect anyone would cover for such flagrant nonsense.
So as unlikely as it seemed on priors, it really does look like Viliam has gone temporarily psychotic, with Michael Vassar as the proximal cause. Honestly this kinda scares me. I previously thought this was just dumb made up drama, but if people really can make people temporarily psychotic like this, it’s a huge worldview shift for me and I’m gonna have to take some time to integrate it. I hope at the very least that you’re in a safe environment and have loved ones that can help you out.