I expect much of the harm comes from people updating an appropriate amount from the post, not seeing the org/person’s reply because they never had to make any important decisions on the subject, then noticing later that many others have updated similarly, and subsequently doing a group think. Then the person/org is considered really very bad by the community, so other orgs don’t want to associate with them, and Open Phil no longer wants to fund them, because they’re all scaredy cats who care about their social status.
To my knowledge this hasn’t actually happened, though possibly this is because nobody wants to be talking about the relevant death-spiraled orgs.
Seems more likely the opposite is at play with many EA orgs like OpenPhil or Anthropic (Edit: in the sense that imo many are over-enthusiastic about them. Not necessarily to the same degree, and possibly for reasons orthogonal to the particular policy being discussed here), so I share your confusion about why orgs would force their employees to work over the weekend to correct misconceptions about them. I think most just want to seem professional and correct to others, and this value isn’t directly related to the core altruistic mission (unless you buy the signaling hypothesis of altruism).
Yeah, doing a group think seems to increase this cost. (And of course the group think is the problem here, and playing to the group think is some sort of corruption, it seems to me.)

I don’t understand this part of your response. Can you expand?
Suppose that it actually were the case that OP and so on would shun orgs based on groupthink rather than based on real reasons. Now, what should an org do, if faced with the possibility of groupthink deciding the org is bad? An obvious response is to try to avoid that. But I’m saying that this response is a sort of corruption. A better response would be to say: Okay, bye! An even better response would be to try to call out these dynamics, in the hopes of redeeming the groupthinkers. The ways the first response is corruption:

- You’re wasting time on trying to get people to like you, but those people have neutered their ability to get good stuff done by engaging in this enforced groupthink.
- You’re distorting your thoughts, confusing yourself between real reality and social reality.
- You’re signaling capitulation to everyone else, saying, “Yes, even people as originally well-intentioned as we were, even such people will eventually see the dark truth, that all must be sacrificed to the will of groupthink”. This also applies internally to the org.
I don’t have a clear opinion on the original proposal… but is it really possible to completely avoid groupthink that decides an org is bad? (I assume that “bad” in this context means something like “not worth supporting”.)
I would say that some degree of group-level weeding out of unworthy organizations seems like a transparently necessary step given the sheer number of organizations that exist. I would also agree with you that delegating all evaluation to the group level has obvious downsides.
If we accept both of those points, I think the question is more a matter of how to most productively scope the manner and degree to which individuals delegate their evaluations to a broader group, rather than a binary choice to wholly avoid (or support) such delegation.
I’m not saying don’t use group-level reasoning. I’m saying that, based on how people are advocating that we behave, it seems like people expect the group-level reasoning that we currently actually have to be hopelessly deranged. If that expectation is accurate, then this is a far worse problem than almost anything else, and we should be focusing on that. No one seems to get what I’m saying though.
Do you disagree that “some degree of group-level weeding out of unworthy organizations seems like a transparently necessary step given the sheer number of organizations that exist”? If not, how does that dynamic differ from “shun[ning] orgs based on groupthink rather than based on real reasons”?
Because groups can in theory compute real reasons. “Group-level weeding out” sounds like an action that a group can take. One can in principle decide which actions to take based on reasons. Groupthink refers to making decisions based not on real reasons, but rather on emergent processes that don’t particularly track truth, and instead e.g. propagate social pressures or whatever. As an example: https://en.wikipedia.org/wiki/Information_cascade
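To make that concrete, here’s a minimal simulation of the textbook cascade setup (a sketch only; the signal accuracy and population size are made-up parameters, not claims about any real community): agents judge an org in sequence, each with a noisy private signal, and each seeing all earlier public verdicts. Once the public record is lopsided by two verdicts, copying the herd is the rational move, so every later verdict stops carrying information.

```python
import random

def run_cascade(true_good=True, accuracy=0.7, n_agents=30, seed=None):
    """Sequential verdicts in the style of Bikhchandani et al. (1992).

    Each agent gets a private signal that matches the truth with
    probability `accuracy` and sees all earlier public verdicts.
    Outside a cascade, a verdict reveals the voter's signal; once the
    inferred signals are lopsided by 2, every rational agent ignores
    their own signal and copies the herd -- right or wrong.
    """
    rng = random.Random(seed)
    net = 0  # inferred positive-minus-negative signals so far
    verdicts = []
    for _ in range(n_agents):
        p_up = accuracy if true_good else 1 - accuracy
        signal = 1 if rng.random() < p_up else -1
        if net >= 2:
            verdict = 1    # up-cascade: private signal ignored
        elif net <= -2:
            verdict = -1   # down-cascade: private signal ignored
        else:
            # Bayes here reduces to counting; on a tie, follow your own signal.
            total = net + signal
            verdict = 1 if total > 0 else -1 if total < 0 else signal
            net += verdict  # non-cascade verdicts reveal the signal
        verdicts.append(verdict)
    return verdicts

# With 70%-accurate signals, a nontrivial share of runs herd on the wrong
# answer anyway (analytically (1-q)^2 / (q^2 + (1-q)^2), about 15.5% here):
wrong = sum(run_cascade(seed=s)[-1] == -1 for s in range(10_000))
print(f"runs ending in a wrong herd: {wrong / 10_000:.1%}")
```

Note that no individual is being dumb here; the failure is that the public record of verdicts can’t carry the private evidence behind them. That’s the sense in which the process stops tracking truth.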
For that distinction to be relevant, individuals need to be able to distinguish whether a particular conclusion of the group is groupthink or whether it’s principled.
If the information being propagated in both cases is primarily the judgment, how does the individual group member determine which judgments are based on real reasons vs not? If the premise is that this very communication style is the problem, then how does one fix that without re-creating much of the original burden on the individual that our group-level coordination was trying to avoid?
If folks try to square this circle through a mechanism like random spot checks on rationales, then things may become eventually consistent, but in many cases I think the time lag for propagating updates may be considerable. Most people would not spot check any particular decision, by definition. Anything that requires folks to repeatedly look at the group’s conclusions for all of their discarded ideas ends up being burdensome IMO. So I have trouble seeing an obvious mechanism for folks to promptly notice that the group reverted its decision that a particular org is not worth supporting. The only possibilities I can think of involve more rigorously centralized coordination than I believe (as a loosely-informed outsider) currently exists in EA.
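To put a rough number on that lag (a toy sketch; the per-month spot-check probability is invented for illustration, nothing here is measured): if each member independently re-checks a given old verdict with probability p per month, their time-to-notice is geometric with mean 1/p, so infrequent spot checks turn one quiet reversal into years of stale belief for much of the group.

```python
import random

def months_until_noticed(check_prob=0.02, n_members=500, seed=0):
    """Toy model: at month 0 the group quietly reverses its verdict on one org.

    Each member independently happens to spot-check that particular old
    verdict with probability `check_prob` per month (a made-up number).
    Returns the month in which each member finally notices the reversal.
    """
    rng = random.Random(seed)
    lags = []
    for _ in range(n_members):
        month = 1
        while rng.random() >= check_prob:  # geometric waiting time, mean 1/p
            month += 1
        lags.append(month)
    return sorted(lags)

lags = months_until_noticed()
print(f"median lag: {lags[len(lags) // 2]} months")             # ~35 months
print(f"90th percentile: {lags[int(len(lags) * 0.9)]} months")  # ~115 months
```

Raising check_prob shortens the lag but multiplies exactly the per-member burden that the group-level coordination was supposed to save, which is the trade-off I mean.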
> If the premise is that this very communication style is the problem, then how does one fix that without re-creating much of the original burden on the individual that our group-level coordination was trying to avoid?
The broken group-level process doesn’t solve anything; it’s broken. I don’t know how to fix it, but a first step would be thinking about the problem at all, rather than ignoring it or dismissing it as intractable before trying.
Okay, so you’re defining the problem as groups transmitting too little information? Then I think a natural first step when thinking about the problem is to determine an upper bound on how much information can be effectively transmitted. My intuition is that the realistic answer for many recipients would turn out to be “not a lot more than is already being transmitted”. If I’m right about that (which is a big “if”), then we might not need much thinking beyond that point to rule out this particular framing of the problem as intractable.
I think you’re very very wrong about that.
Fair enough. Thanks for the conversation!