But I’m just confused—if there’s agreement that MIRI isn’t particularly bad about this, then this seems to mostly preclude environmental attribution and suggest personal attribution?
Suppose it’s really common in normal corporations for someone to be given ridiculous assignments by their boss and that this leads to mental illness at a high rate. Each person at a corporation like this would have a specific story of how their boss gave them a really ridiculous assignment and this caused them mental problems. That specific story in each case would be a causal model (if they hadn’t received that assignment or had anything similar to that happen, maybe they wouldn’t have that issue). This is all the case even if most corporations have this sort of thing happen.
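To make the “specific story as a causal model” point concrete, here’s a minimal toy sketch in Python (hypothetical variables, made-up numbers, not a claim about any actual rates) of the counterfactual “if they hadn’t received that assignment, maybe they wouldn’t have that issue”:

```python
# Toy counterfactual sketch; every number here is made up purely for illustration.
import random

random.seed(0)

def fraction_with_problems(gets_ridiculous_assignment: bool, n: int = 100_000) -> float:
    """Fraction of simulated employees who develop mental problems."""
    count = 0
    for _ in range(n):
        baseline_vulnerability = random.random() < 0.05  # assumed base rate
        assignment_stress = gets_ridiculous_assignment and random.random() < 0.10  # assumed added risk
        if baseline_vulnerability or assignment_stress:
            count += 1
    return count / n

p_with = fraction_with_problems(True)
p_without = fraction_with_problems(False)
print(f"P(problem | assignment)    ~ {p_with:.3f}")
print(f"P(problem | no assignment) ~ {p_without:.3f}")
# The gap between these two rates is what the specific story is implicitly claiming.
```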
In a sense, everything is caused by everything. If not for certain specifics of the physical constants, the universe as we know it wouldn’t exist. If cosmic rays would strike you in just the right ways, it could probably prevent psychosis. Etc. Further, since causality is not directly observable, even when there isn’t a real causal relationship, it’s possible to come up with a specific story where there is.
This leads to a problem for attributing One True Causal Story; which one to pick? Probably we shouldn’t feel restricted to only having one, as multiple frames may be relevant. But clearly we need some sort of filter.
Probably the easiest way to get a filter is by looking at applications. E.g., there’s the application of deciding which social environment you should join, which presumably is about the relative effects of the different environments on a person. I don’t think this is the one that most closely aligns with your point, though.
Probably an application near to you is: how should rationalist social environments be run? (You’re advocating for something more like Leverage, in certain respects?) Here one doesn’t necessarily need to compare across actual social environments; one can consider counterfactual ones too. However, for this a cost/benefit analysis becomes important: how difficult would a change be to implement, and how much would it help with the mental health problems?
This is hard to deduce, and so it becomes tempting to use comparisons across actual social environments as a proxy. E.g. if most people get ridiculous assignments from their boss, then that probably means there is some reason why that’s very hard to avoid. And if most people don’t get severe mental illnesses, then that puts a limit on how bad the ridiculous assignments can be on their own. So it doesn’t obviously pass a cost-benefit test.
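To spell out that “puts a limit on” step with purely made-up numbers (a sketch, not actual statistics):

```python
# Back-of-the-envelope bound; both numbers are invented for illustration only.
p_assignment = 0.60       # assumed: fraction of workers who get ridiculous assignments
p_severe_illness = 0.03   # assumed: fraction of workers with severe mental illness

# Even if every severe case were caused by such an assignment,
# P(severe illness | ridiculous assignment) could be at most:
upper_bound = p_severe_illness / p_assignment
print(f"P(severe illness | ridiculous assignment) <= {upper_bound:.0%}")  # 5%
```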
Another thing one could look at is how well the critics are doing; are they implementing something better? Here again I’m looking at it from afar, so it’s hard for me to know. But I do have one private semi-direct signal: I am friends with one of the people in your social circles, and I talked to them recently, and they seemed to be doing really badly. Though I don’t know enough to judge too confidently; maybe your group isn’t actually focusing on making something more healthy, and so we wouldn’t expect people to be doing better; maybe the person I’m thinking of isn’t too deeply involved with what you’re doing; maybe it’s just a random downswing and doesn’t mean much in the general picture; maybe it’s just this person; I don’t know.
But if nobody else is implementing something better, then that feels like a reason to be skeptical about the causal assignment here. Though there are two things to note: first, that this could very well fit with the general Vassarite model—the reason implementing better things is hard might be that “conflict theory is right” (hard to come up with a better explanation, though people have tried...). And secondly, that neither of the above makes it particularly relevant to attribute causality to the individuals involved, since they are inherently about the environment.
So what does make it relevant to attribute causality to the individuals involved? Well, there’s a third purpose: Updating. As an outside observer, I guess that’s the most natural mindset for me to enter. Given that these posts were written and publicized, how should I change my beliefs about the world? I should propagate evidence upwards through the causal links to these events. Why were these accusations raised, by this person, against this organization, on this forum, at this time? Here, everything is up for grabs, whether it’s attribution to a person or an organization; but also, here things tend to be much more about the variance; things that are constant in society do not get updated much by this sort of information.
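As a minimal sketch of the “constants don’t get updated much” point (made-up likelihoods, nothing here is an estimate of the real situation): evidence that is roughly as likely under “this organization is unusually bad” as under “this is just what organizations are like” barely moves the posterior, while evidence that would be rare in a normal organization moves it a lot.

```python
# Toy Bayesian update; all probabilities are invented for illustration.
def posterior(prior: float, p_e_if_h: float, p_e_if_not_h: float) -> float:
    """P(H | E) by Bayes' rule."""
    p_e = prior * p_e_if_h + (1 - prior) * p_e_if_not_h
    return prior * p_e_if_h / p_e

prior = 0.10  # assumed prior that this particular organization is unusually bad

# Evidence that is common everywhere (likelihood ratio ~1) barely updates the prior:
print(posterior(prior, p_e_if_h=0.50, p_e_if_not_h=0.45))  # ~0.11

# Evidence that would be rare unless the organization were unusually bad updates a lot:
print(posterior(prior, p_e_if_h=0.50, p_e_if_not_h=0.05))  # ~0.53
```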
Anyway, my girlfriend is telling me to go to bed now, so I can’t continue this post. I will probably be back tomorrow.
(You’re advocating for something more like Leverage, in certain respects?)
Not really? I think even if Leverage turned out better in some ways that doesn’t mean switching to their model would help. I’m primarily not attempting to make policy recommendations here, I’m attempting to output the sort of information a policy-maker could take into account as empirical observations.
This is also why the “think about applications” point doesn’t seem that relevant; lots of people have lots of applications, and they consult different information sources (e.g. encyclopedias, books), each of which isn’t necessarily specialized to their application.
E.g. if most people get ridiculous assignments by their boss, then that probably means there is some reason why that’s very hard to avoid.
That seems like a fully general argument against trying to fix common societal problems? I mean, how do you expect people ever made society better in the past?
In any case, even if it’s hard to avoid, it helps to know that it’s happening and is possibly a bottleneck on intellectual productivity; if it’s a primary constraint then Theory of Constraints suggests focusing a lot of attention on it.
It seems like the general mindset you’re taking here might imply that it’s useless to read biographies, news reports, history, and accounts of how things were invented/discovered, on the basis that whoever writes it has a lot of leeway in how they describe the events, although I’m not sure if I’m interpreting you correctly.
I’m primarily not attempting to make policy recommendations here, I’m attempting to output the sort of information a policy-maker could take into account as empirical observations.
This is also why the “think about applications” point doesn’t seem that relevant; lots of people have lots of applications, and they consult different information sources (e.g. encyclopedias, books), each of which isn’t necessarily specialized to their application.
This seems to me to be endorsing “updating” as a purpose; evidence flows up the causal links (and down the causal links, but for this purpose the upwards direction is more important). So I will be focusing on that purpose here. The most interesting causal links are then the ones which imply the biggest updates.
Which I suppose is a very subjective thing? It depends heavily not just on the evidence one has about this case, but also on one’s prior beliefs about psychosis, organizational structure, etc.
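As a small illustration of that prior-dependence (again with invented numbers): the same evidence, i.e. the same likelihood ratio, lands different readers at very different posteriors depending on where they start.

```python
# Same evidence, different priors; numbers invented for illustration.
def update(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    post_odds = (prior / (1 - prior)) * likelihood_ratio
    return post_odds / (1 + post_odds)

likelihood_ratio = 4.0  # assumed evidential strength of the accounts

for prior in (0.05, 0.30, 0.70):
    print(f"prior {prior:.2f} -> posterior {update(prior, likelihood_ratio):.2f}")
# prior 0.05 -> posterior 0.17
# prior 0.30 -> posterior 0.63
# prior 0.70 -> posterior 0.90
```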
In theory, the updates should tend to bring everybody closer to some consensus, but the direction of change may vary wildly from person to person, depending on how they differ from that consensus. Though in practice, I’m already very essentialist, and my update is in an essentialist direction, so that doesn’t seem to cash out.
(… or does it? One thing I’ve been essentialist about is that I’ve been skeptical that “cPTSD” is a real thing caused by trauma, rather than some more complicated genetic thing. But the stories from especially Leverage and also to an extent MIRI have made me update enormously hard in favor of trauma being able to cause those sorts of mental problems—under specific conditions. I guess there’s an element of, on the more ontological/theoretical level, people might converge, but people’s preexisting ontological/theoretical beliefs may cause their assessments of the situation to diverge.)
Not really? I think even if Leverage turned out better in some ways that doesn’t mean switching to their model would help.
My phrasing might have been overly strong, since you wouldn’t endorse a lot of what Leverage does, due to it being cultish. What I meant is that one thing you seem to have endorsed is talking more about “objects” and such.
That seems like a fully general argument against trying to fix common societal problems? I mean, how do you expect people ever made society better in the past?
In any case, even if it’s hard to avoid, it helps to know that it’s happening and is possibly a bottleneck on intellectual productivity; if it’s a primary constraint then Theory of Constraints suggests focusing a lot of attention on it.
I agree that this is a rather general argument, but it’s not supposed to stand on its own. The structure of my argument isn’t “MIRI is normal here so it’s probably hard to change, so the post isn’t actionable”, it’s “It’s dubious that things happened exactly as the OP describes, MIRI is normal here so it’s probably hard to change, it’s hard to know whether the changes implied would even work because they’re entirely hypothetical, the social circle raising the critique does not seem to be able to use their theory to fix their own mental health, so the post isn’t actionable”.
(I will send you a PM with the name of the person in your social circle who seemed to currently be doing terribly, so you can say whether you think I am misinterpreting the situation around them.)
None of these would, individually, be a strong argument. Even together they’re not a knockdown argument. But these limitations do make it very difficult for me to make much of it.
It seems like the general mindset you’re taking here might imply that it’s useless to read biographies, news reports, history, and accounts of how things were invented/discovered, on the basis that whoever writes it has a lot of leeway in how they describe the events, although I’m not sure if I’m interpreting you correctly.
Yes. I don’t really read biographies or history, and mostly don’t read the news, for quite similar reasons. When I do, I always try to keep selection biases and interpretation biases strongly in mind.
I have gradually become more and more aware of the problems with this, but I also believe that excessive focus on these sorts of things leads to people overinterpreting everything. There’s probably a balance to be struck.