As weird as the situation got, with people being afraid of demonic subprocesses being implanted by other people, there were also psychotic breaks involving demonic subprocess narratives around MIRI and CFAR.
[...]
As a consequence, the people most mentally concerned with strange social metaphysics were marginalized, and had more severe psychoses with less community support, hence requiring normal psychiatric hospitalization.
That sounds to me like you are saying that people who were talking about demons got marginalized. To me that’s not a sign of MIRI/CFAR being culty, but what most people would expect from a group of rationalists. It might have been a wrong decision not to take people who talk about demons more seriously to address their issues, but it doesn’t match the error type of what’s culty.
If I’m misunderstanding what you are saying, can you clarify?
There’s an important problem here which Jessica described in some detail in a more grounded way than the “demons” frame:
As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors (“know-how”) and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver’s conscious goals. According to IFS-like psychological models, it’s common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it’s hard to rule out based on physics or psychology.
If we’re confused about a problem like Friendly AI, it’s preparadigmatic & therefore most people trying to talk about it are using words wrong. Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they’re confused about, than for simply ignoring the problems.
-”Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they’re confused about, than for simply ignoring the problems.”
I feel like “talking confusedly” here means “talking in a way that no one else can understand”. If no one else can understand, they cannot give feedback on your ideas. That said, it is not clear that penalizing confused talk is a solution to this problem.
At least some people were able to understand, though. This led to a sort of social division where some people were much more willing/able to talk about certain social phenomena than other people were.
It’s not particularly a sign of being “culty”, but my main point was that it worked out worse overall for the people involved, so it doesn’t make much sense to think Leverage handled its mental health issues and weird metaphysics worse than MIRI/CFAR did.
I do think that Bayesian virtue, taken to its logical conclusion, would consider these hypotheses to the point of thinking about whether they explain sensory data better than alternative hypotheses, and not reject them because they’re badly-formalized and unproven at the start; there is an exploratory stage in generating new theories, where the initial explanations are usually wrong in important places, but can lead to more refined theories over time.
It seems like one of the problems with ‘the Leverage situation’ is that collectively, we don’t know how bad it was for people involved. There are many key Leverage figures who don’t seem to have gotten involved in these conversations (anonymously or not) or ever spoken publicly or in groups connected to this community about their experience. And, we have evidence that some of them have been hiding their post-Leverage experiences from each other.
So I think making the claim that the MIRI/CFAR related experiences were ‘worse’ because there exists evidence of psychiatric hospitalisation etc is wrong and premature.
And also? I’m sort of frustrated that you’re repeatedly saying that -right now-, when people are trying to encourage stories from a group of people who we might expect to have felt insecure, paranoid, and gaslit about whether anything bad ‘actually happened’ to them.
It’s a guess based on limited information, obviously. I tagged it as an inference. It’s not just based on public information; it’s also based on having talked with some ex-Leverage people. I don’t like that you’re considering it really important for ex-Leverage people to say things were “really bad” for them while discouraging me from saying how bad my own (and others’) experiences were; that’s optimizing for a predetermined conclusion rather than actually listening to people (which could reveal unexpected information). I’ll revise my estimate if I get sufficient evidence in the other direction.