It’s not particularly a sign of being “culty”, but my main point was that the MIRI/CFAR environment worked out worse for the people involved overall, so it doesn’t make that much sense to think Leverage did worse with its mental health issues and weird metaphysics.
I do think that Bayesian virtue, taken to its logical conclusion, would mean considering these hypotheses seriously enough to ask whether they explain sensory data better than alternative hypotheses, rather than rejecting them at the outset for being badly formalized and unproven; there is an exploratory stage in generating new theories, where the initial explanations are usually wrong in important places but can lead to more refined theories over time.
It seems like one of the problems with ‘the Leverage situation’ is that, collectively, we don’t know how bad it was for the people involved. Many key Leverage figures don’t seem to have gotten involved in these conversations (anonymously or not), or to have ever spoken about their experience publicly or in groups connected to this community. And we have evidence that some of them have been hiding their post-Leverage experiences from each other.
So I think claiming that the MIRI/CFAR-related experiences were ‘worse’, because there exists evidence of psychiatric hospitalisation etc., is wrong and premature.
And also? I’m sort of frustrated that you’re repeatedly saying that -right now-, when people are trying to encourage stories from a group of people whom we might expect to have felt insecure, paranoid, and gaslit about whether anything bad ‘actually happened’ to them.
It’s a guess based on limited information, obviously; I tagged it as an inference. It’s not just based on public information, it’s also based on having talked with some ex-Leverage people. I don’t like that you consider it really important for ex-Leverage people to say things were “really bad” for them while discouraging me from saying how bad my own (and others’) experiences were; that’s optimizing for a predetermined conclusion instead of actually listening to people (which could reveal unexpected information). I’ll revise my estimate if I get sufficient evidence in the other direction.