Why not assign an environmental cause in a case where one exists and I have evidence about it?
As I understand it, both sides of the issue agree that MIRI isn’t uniquely bad when it comes to frame control and such. MIRI might have some unique themes, e.g. AI torturing people instead of the devil torturing people, or lying about the promise of an approach for AI instead of lying about the promise of an approach for business, but it’s not some unique evil on MIRI’s part. (Please correct me if I’m misunderstanding your accusations here.)
As such, it’s not that MIRI, compared to other environments, caused this. Of course, this does not mean that MIRI didn’t in some more abstract sense cause it, in the sense that one could imagine some MIRI’ which was like MIRI but didn’t have the features you mention as contributors. But the viability of creating such an organization, both cost-wise and success-wise, is unclear, and because the organization doesn’t exist but is instead a counterfactual construct, it’s not even clear that it would have the effects you hope it would. So assigning “MIRI not being MIRI’” as the cause seems to require a much greater leap of faith.
“Vassarites are prone to psychosis” is obviously a case of the fundamental attribution error; that’s not how physical causality works. There will be specific environmental causes in “normal” cases of trauma as well.
Not so obvious to me. There were tons of people in these environments with no psychosis at all, as far as I know? Meanwhile, the fundamental attribution error is about cases where people attribute something to a person when there is a situational factor that would have caused everyone else to act the same way.
Of course you could attribute this to subtleties of the social relations, who is connected to whom and respected by whom. But this doesn’t seem like an obviously correct attribution to me. Maybe if I knew more about the social relations, it would be.
I think you’re trying to use a regression model where I would use something more like a Bayes net. This makes some sense in that I had direct personal experience that includes lots of nodes in the Bayes net, and you don’t, so you’re going to use a lower-resolution model than me. But people who care about the Bayes net I lived in can update on the information I’m presenting.
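(To make the contrast concrete, here’s a toy sketch; the structure and all numbers are invented for illustration, not claims about any real organization. The point is that a regression on group membership alone recovers only a coarse difference, while someone who observed intermediate nodes can condition on far more informative variables.)

```python
# Toy contrast between a coarse "regression-style" view and a Bayes net
# with an intermediate node. All structure and numbers here are invented;
# nothing is an estimate about a real organization.

# Net: Org -> StressfulInteractions -> Psychosis.
# An insider observes StressfulInteractions directly; an outsider
# effectively sees only Org and Psychosis.

p_stress_given_org = {"A": 0.30, "B": 0.25}        # P(stress | org)
p_psych_given_stress = {True: 0.10, False: 0.01}   # P(psychosis | stress)

def p_psych_given_org(org):
    """Marginalize out the intermediate node: roughly what a regression
    on org membership alone would recover, given unlimited data."""
    p_s = p_stress_given_org[org]
    return p_s * p_psych_given_stress[True] + (1 - p_s) * p_psych_given_stress[False]

# Outsider's low-resolution view: the org labels barely separate.
for org in ("A", "B"):
    print(f"P(psychosis | org={org}) = {p_psych_given_org(org):.4f}")

# Insider's higher-resolution view: the intermediate node separates 10x.
for s in (True, False):
    print(f"P(psychosis | stress={s}) = {p_psych_given_stress[s]:.4f}")
```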
There were tons of people in these environments with no psychosis at all, as far as I know?
I think the rate might be higher for former MIRI employees in particular, but I’m not sure how to evaluate; the official base rate is that around 3% of people have or will experience a psychotic break in their lifetime. If there are at least 3 psychotic breaks among former MIRI employees, then MIRI would need to have had 100 employees to match the general-population rate (perhaps more if the psychotic breaks happened within a few years of each other, whereas in the general population they’re pretty spread out), although there’s noise here and the official stat could be wrong.
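(A rough sanity check on that arithmetic, as a short script. The 3% figure is the one above; treating employees as independent lifetime draws is my assumption, and it is generous to the no-difference hypothesis, since lifetime risk is spread over decades while the cases here may cluster within a few years.)

```python
# Sanity check of the base-rate comparison: at a ~3% lifetime incidence,
# how surprising are >= 3 psychotic breaks among N former employees?
# Assumes independent draws at the full lifetime rate, which overstates
# the baseline for any short time window.

from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for n in (30, 50, 100):
    print(f"N={n}: expected cases = {n * 0.03:.1f}, "
          f"P(>=3 cases) = {p_at_least(3, n, 0.03):.2f}")
# N=100 gives an expected count of 3.0 and P(>=3 cases) of roughly 0.58;
# N=30 gives an expected count of 0.9 and P(>=3 cases) of roughly 0.06.
```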
Anyway, “MIRI is especially likely to cause psychosis” (something that could be output by the type of regression model you’re considering) is not the main claim I’m making.
Part of what’s strange about attributing things to “Vassarites” in a regression model is that part of how “Vassarites” (including Vassar) became that way is through environmental causes. E.g. I listened to Michael’s ideas more because I was at MIRI and Michael was pointing out features of MIRI and the broader situation that seemed relevant to me given my observations, and other people around didn’t seem comparably informationally helpful. I have no family history of schizophrenia (that I know of), only bipolar disorder.
I think the rate might be higher for former MIRI employees in particular, but I’m not sure how to evaluate; the official base rate is that around 3% of people have or will experience a psychotic break in their lifetime. If there are at least 3 psychotic breaks among former MIRI employees, then MIRI would need to have had 100 employees to match the general-population rate, although there’s noise here and the official stat could be wrong.
Isn’t the rate of general mental illness also higher, e.g. autism or ADHD, which is probably not caused by MIRI? (Both among MIRI and among rationalists and rationalist-adjacent people more generally; e.g. I myself happen to be autistic, ADHD, GD, and probably also have one or two personality disorders, and I have a family history of BPD.) Almost all mental illnesses are correlated, so if you select for one mental illness you’d expect others to come along with it.
Anyway, “MIRI is especially likely to cause psychosis” (something that could be output by the type of regression model you’re considering) is not the main claim I’m making.
Part of what’s strange about attributing things to “Vassarites” in a regression model is that part of how “Vassarites” (including Vassar) became that way is through environmental causes. E.g. I listened to Michael’s ideas more because I was at MIRI and Michael was pointing out features of MIRI and the broader situation that seemed relevant to me given my observations, and other people around didn’t seem comparably informationally helpful. I have no family history of schizophrenia (that I know of), only bipolar disorder.
I am very sympathetic to the idea that the Vassarites are not nearly as much of an environmental cause of the psychosis as they might look. It’s the same principle as above: Vassar selected for psychosis-proneness, for being critical of MIRI, etc., so you’d expect higher rates in his circle even if he had no effect at all.
(I think that’s a major problem with naive regressions: taking something that’s really a consequence and adding it to the regression as if it were a cause.)
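(A minimal simulation of that selection story; every number is invented, purely to show that the selection mechanism alone produces elevated rates with zero causal effect of joining.)

```python
# Simulation: a recruiter who disproportionately attracts people with
# high latent psychosis-proneness will show elevated psychosis rates in
# his circle even though joining has zero causal effect. All numbers
# are invented for illustration.

import random

random.seed(0)
N = 100_000
counts = {True: [0, 0], False: [0, 0]}  # joined -> [people, psychosis cases]

for _ in range(N):
    prone = random.random() < 0.10                            # 10% latent proneness
    joined = random.random() < (0.30 if prone else 0.05)      # prone people join more
    psychosis = random.random() < (0.20 if prone else 0.01)   # joining has no effect
    counts[joined][0] += 1
    counts[joined][1] += psychosis

for joined in (True, False):
    people, cases = counts[joined]
    print(f"joined={joined}: psychosis rate = {cases / people:.3f}")
# The joined group's rate comes out several times higher (~0.09 vs ~0.02)
# despite joining having no causal effect in the model.
```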
I think you’re trying to use a regression model where I would use a Bayes net. This makes some sense in that I had direct personal experience that includes lots of nodes in the Bayes net, and you don’t, so you’re going to use a lower-resolution model than me. But people who care about the Bayes net I lived in can update on the information I’m presenting.
It’s tricky because I try to read the accounts, but they’re all going to be filtered through people’s perceptions, and they’re all going to assume a lot of background knowledge that I don’t have, due to not having observed it. I could put a lot of effort into figuring out what’s true and false, representative and unrepresentative, but that’s probably not possible for me, for various reasons. I could also just ignore the whole drama. But I’m just confused—if there’s agreement that MIRI isn’t particularly bad about this, then this seems to mostly preclude environmental attribution and suggest personal attribution?
I wouldn’t necessarily say that I use a regression model, as e.g. I’m aware of the problem with just blaming Vassar for causing others’ psychosis. There’s definitely some truth to me being forced to use a lower-resolution model. And that can be terrible. Partly I just have a very strong philosophical leaning towards essentialism, but also partly it just, from afar, seems to be the best explanation.
I’m just confused—if there’s agreement that MIRI isn’t particularly bad about this, then this seems to mostly preclude environmental attribution and suggest personal attribution?
I’ve read Moral Mazes and worked a few years in the corporate world at Fannie Mae. I’ve also talked a lot with Jessica and others in the MIRI cluster who had psychotic breaks. It seems to me like what happens to middle managers is in some important sense even worse than a psychotic break. Jessica, Zack, and Devi seem to be able to represent their perspectives now, to be able to engage with the hypothesis that some activity is in good faith, and to weigh symmetry considerations instead of reflexively siding with transgressors.
Ordinary statistical methods—and maybe empiricism more generally—cannot shed light on pervasive, systemic harms, when we lack the capacity to perform controlled experiments on many such systems. In such cases, we instead need rationalist methods, i.e. thinking carefully about mechanisms from first principles. We can also try to generalize efficiently from microcosms of the general phenomenon, e.g. generalizing from how people respond to unusually blatant abuse by individuals or institutions, to make inferences about the effects of pervasive abuse.
But corporate employers are not the only context people live in. My grandfather was an independent insurance broker for much of his career. I would expect someone working for a low-margin business in a competitive industry to sustain much less psychological damage, though I would also expect them to be paid less and maybe have a more strenuous job. I don’t think the guys out on the street a few blocks from my apartment selling fruit for cash face anything like what Jessica faced, and I’d be somewhat surprised if they ended up damaged the way the people in Moral Mazes seem damaged.
But I’m just confused—if there’s agreement that MIRI isn’t particularly bad about this, then this seems to mostly preclude environmental attribution and suggest personal attribution?
Suppose it’s really common in normal corporations for someone to be given ridiculous assignments by their boss and that this leads to mental illness at a high rate. Each person at a corporation like this would have a specific story of how their boss gave them a really ridiculous assignment and this caused them mental problems. That specific story in each case would be a causal model (if they hadn’t received that assignment or had anything similar to that happen, maybe they wouldn’t have that issue). This is all the case even if most corporations have this sort of thing happen.
In a sense, everything is caused by everything. If not for certain specifics of the physical constants, the universe as we know it wouldn’t exist. If cosmic rays struck you in just the right way, that could probably prevent psychosis. Etc. Further, since causality is not directly observable, even when there isn’t a real causal relationship, it’s possible to come up with a specific story in which there is.
This leads to a problem for attributing One True Causal Story: which one to pick? Probably we shouldn’t feel restricted to only having one, as multiple frames may be relevant. But clearly we need some sort of filter.
Probably the easiest way to get a filter is by looking at applications. E.g., there’s the application of deciding which social environment you should join, which presumably depends on the relative effects of the different environments on a person. I don’t think this is the application that most closely aligns with your point, though.
Probably an application closer to your point is: how should rationalist social environments be run? (You’re advocating for something more like Leverage, in certain respects?) Here one doesn’t necessarily need to compare across actual social environments; one can consider counterfactual ones too. However, for this a cost/benefit analysis becomes important: how difficult would a change be to implement, and how much would it help with the mental health problems?
This is hard to deduce, and so it becomes tempting to use comparisons across actual social environments as a proxy. E.g. if most people get ridiculous assignments from their boss, then that probably means there is some reason why that’s very hard to avoid. And if most people don’t get severe mental illnesses, then that puts a limit on how bad the ridiculous assignments can be on their own. So it doesn’t obviously pass a cost-benefit test.
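(That limit can be made explicit with invented numbers: if, say, 70% of workers get ridiculous assignments at some point while only 5% develop severe mental illness, the illness rate among those who got such assignments is capped.)

```python
# Making the bound explicit, with invented illustrative numbers.
# Since P(illness) >= P(assignment) * P(illness | assignment),
# it follows that P(illness | assignment) <= P(illness) / P(assignment).

p_assignment = 0.70  # assumed: fraction of workers who get ridiculous assignments
p_illness = 0.05     # assumed: overall rate of severe mental illness

bound = p_illness / p_assignment
print(f"P(illness | ridiculous assignment) <= {bound:.3f}")  # <= 0.071
```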
Another thing one could look at is how well the critics are doing; are they implementing something better? Here again I’m looking at it from afar, so it’s hard for me to know. But I do have one private semi-direct signal: I am friends with one of the people in your social circles, and I talked to them recently, and they seemed to be doing really badly. Though I don’t know enough to judge too confidently; maybe your group isn’t actually focusing on making something more healthy, and so we wouldn’t expect people to be doing better; maybe the person I’m thinking of isn’t too deeply involved with what you’re doing; maybe it’s just a random downswing and doesn’t mean much in the general picture; maybe it’s just this person; I don’t know.
But if nobody else is implementing something better, then that feels like a reason to be skeptical about the causal assignment here. Though there are two things to note: first, this could very well fit with the general Vassarite model—the reason implementing better things might be hard might be that “conflict theory is right” (hard to come up with a better explanation, though people have tried...). And second, neither of the above makes it particularly relevant to attribute causality to the individuals involved, since they are inherently about the environment.
So what does make it relevant to attribute causality to the individuals involved? Well, there’s a third purpose: Updating. As an outside observer, I guess that’s the most natural mindset for me to enter. Given that these posts were written and publicized, how should I change my beliefs about the world? I should propagate evidence upwards through the causal links to these events. Why were these accusations raised, by this person, against this organization, on this forum, at this time? Here, everything is up for grabs, whether it’s attribution to a person or an organization; but also, here things tend to be much more about the variance: beliefs about things that are constant across society do not get updated much by this sort of information.
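(A minimal numeric sketch of that last point, with invented numbers: the size of an update depends on how much the observation’s likelihood differs across hypotheses, so features that are roughly constant across society barely move beliefs.)

```python
# Toy update on "these posts were written and publicized". The upstream
# cause matters only insofar as it changes the likelihood of the
# observation; all numbers are invented.

def posterior(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule for a binary hypothesis."""
    joint_true = prior * p_obs_if_true
    joint_false = (1 - prior) * p_obs_if_false
    return joint_true / (joint_true + joint_false)

# Informative case: the write-up is 5x likelier if there is a problem.
print(posterior(0.2, 0.5, 0.1))    # ~0.556, a large update from the 0.2 prior

# "Constant in society" case: nearly equal likelihoods, so a tiny update.
print(posterior(0.2, 0.11, 0.10))  # ~0.216, barely moves
```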
Anyway, my girlfriend is telling me to go to bed now, so I can’t continue this post. I will probably be back tomorrow.
(You’re advocating for something more like Leverage, in certain respects?)
Not really? I think even if Leverage turned out better in some ways that doesn’t mean switching to their model would help. I’m primarily not attempting to make policy recommendations here; I’m attempting to output the sort of information a policy-maker could take into account as empirical observations.
This is also why the “think about applications” point doesn’t seem that relevant; lots of people have lots of applications, and they consult different information sources (e.g. encyclopedias, books), each of which isn’t necessarily specialized to their application.
E.g. if most people get ridiculous assignments from their boss, then that probably means there is some reason why that’s very hard to avoid.
That seems like a fully general argument against trying to fix common societal problems? I mean, how do you think people ever made society better in the past?
In any case, even if it’s hard to avoid, it helps to know that it’s happening and is possibly a bottleneck on intellectual productivity; if it’s a primary constraint then Theory of Constraints suggests focusing a lot of attention on it.
It seems like the general mindset you’re taking here might imply that it’s useless to read biographies, news reports, history, and accounts of how things were invented/discovered, on the basis that whoever writes it has a lot of leeway in how they describe the events, although I’m not sure if I’m interpreting you correctly.
I’m primarily not attempting to make policy recommendations here; I’m attempting to output the sort of information a policy-maker could take into account as empirical observations.
This is also why the “think about applications” point doesn’t seem that relevant; lots of people have lots of applications, and they consult different information sources (e.g. encyclopedias, books), each of which isn’t necessarily specialized to their application.
This seems to me to be endorsing “updating” as a purpose; evidence flows up the causal links (and down the causal links, but for this purpose the upwards direction is more important). So I will be focusing on that purpose here. The most interesting causal links are then the ones which imply the biggest updates.
Which I suppose is a very subjective thing? It depends heavily not just on the evidence one has about this case, but also on one’s prior beliefs about psychosis, organizational structure, etc.
In theory, the updates should tend to bring everybody closer to some consensus, but the direction of change may vary wildly from person to person, depending on how they differ from that consensus. Though in practice, I’m already very essentialist, and my update is in an essentialist direction, so that doesn’t seem to cash out.
(… or does it? One thing I’ve been essentialist about is that I’ve been skeptical that “cPTSD” is a real thing caused by trauma, rather than some more complicated genetic thing. But the stories from especially Leverage and also to an extent MIRI have made me update enormously hard in favor of trauma being able to cause those sorts of mental problems—under specific conditions. I guess there’s an element of, on the more ontological/theoretical level, people might converge, but people’s preexisting ontological/theoretical beliefs may cause their assessments of the situation to diverge.)
Not really? I think even if Leverage turned out better in some ways that doesn’t mean switching to their model would help.
My phrasing might have been overly strong, since you presumably wouldn’t endorse a lot of what Leverage does, due to it being cultish. What I meant is that one thing you seem to have endorsed is talking more about “objects” and such.
That seems like a fully general argument against trying to fix common societal problems? I mean, how do you think people ever made society better in the past?
In any case, even if it’s hard to avoid, it helps to know that it’s happening and is possibly a bottleneck on intellectual productivity; if it’s a primary constraint then Theory of Constraints suggests focusing a lot of attention on it.
I agree that this is a rather general argument, but it’s not supposed to stand on its own. The structure of my argument isn’t “MIRI is normal here, so it’s probably hard to change, so the post isn’t actionable”; it’s “It’s dubious that things happened exactly as the OP describes; MIRI is normal here, so it’s probably hard to change; it’s hard to know whether the implied changes would even work, because they’re entirely hypothetical; and the social circle raising the critique does not seem to be able to use its theory to fix its own mental health—so the post isn’t actionable”.
(I will send you a PM with the name of the person in your social circle who seemed to currently be doing terribly, so you can say whether you think I am misinterpreting the situation around them.)
None of these would, individually, be a strong argument. Even together they’re not a knockdown argument. But these limitations do make it very difficult for me to make much of it.
It seems like the general mindset you’re taking here might imply that it’s useless to read biographies, news reports, history, and accounts of how things were invented/discovered, on the basis that whoever writes it has a lot of leeway in how they describe the events, although I’m not sure if I’m interpreting you correctly.
Yes. I don’t really read biographies or history, and mostly don’t read the news, for quite similar reasons. When I do, I always try to keep selection biases and interpretation biases strongly in mind.
I have gradually become more and more aware of the problems with this, but I also believe that excessive focus on these sorts of things leads to people overinterpreting everything. There’s probably a balance to be struck.