I don’t have a clear opinion on the original proposal… but is it really possible to completely avoid groupthink that decides an org is bad? (I assume that “bad” in this context means something like “not worth supporting”.)
I would say that some degree of group-level weeding out of unworthy organizations seems like a transparently necessary step given the sheer number of organizations that exist. I would also agree with you that delegating all evaluation to the group level has obvious downsides.
If we accept both of those points, I think the question is more a matter of how to most productively scope the manner and degree to which individuals delegate their evaluations to a broader group, rather than a binary choice to wholly avoid (or support) such delegation.
I’m not saying don’t use group-level reasoning. I’m saying that, based on how people are advocating we behave, they seem to expect the group-level reasoning we currently actually have to be hopelessly deranged. If that expectation is accurate, then this is a far worse problem than almost anything else, and we should be focusing on that. No one seems to get what I’m saying, though.
Do you disagree that “some degree of group-level weeding out of unworthy organizations seems like a transparently necessary step given the sheer number of organizations that exist”? If not, how does that dynamic differ from “shun[ning] orgs based on groupthink rather than based on real reasons”?
Because groups can in theory compute real reasons. “Group-level weeding out” sounds like an action that a group can take, and one can in principle decide which actions to take based on reasons. Groupthink refers to making decisions based not on real reasons, but on emergent processes that don’t particularly track truth and instead propagate e.g. social pressures. As an example: https://en.wikipedia.org/wiki/Information_cascade
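The cascade mechanism linked above can be made concrete with a short simulation. This is a minimal, illustrative sketch of the standard sequential-signal model (agents act in order, each with a private signal that is correct with probability p); the function name and parameters are my own, not from the thread:

```python
import random

def simulate_cascade(n_agents=100, p=0.7, true_state=1, seed=None):
    """Sequential decisions with private signals; returns the action
    list and the index at which a cascade (if any) begins."""
    rng = random.Random(seed)
    actions = []
    diff = 0  # net count: (#actions for state 1) - (#actions for state 0)
    cascade_at = None
    for i in range(n_agents):
        if abs(diff) >= 2:
            # Cascade: the public history outweighs any single private
            # signal, so the agent rationally herds. Note their action
            # now reveals nothing about their own information.
            action = 1 if diff > 0 else 0
            if cascade_at is None:
                cascade_at = i
        else:
            # No cascade yet: the agent follows their private signal,
            # which is correct with probability p.
            signal = true_state if rng.random() < p else 1 - true_state
            action = signal
            diff += 1 if action == 1 else -1
        actions.append(action)
    return actions, cascade_at
```

The key property, and the reason cascades are relevant to the groupthink distinction above, is that once the action tally differs by two, every later action carries zero information about anyone’s private signal: the group’s apparent unanimity propagates a judgment without propagating any reasons, and the entire chain can lock in on a wrong answer after just two unlucky early signals.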
For that distinction to be relevant, individuals need to be able to distinguish whether a particular conclusion of the group is groupthink or whether it’s principled.
If the information being propagated in both cases is primarily the judgment, how does the individual group member determine which judgments are based on real reasons vs not? If the premise is that this very communication style is the problem, then how does one fix that without re-creating much of the original burden on the individual that our group-level coordination was trying to avoid?
If folks try to square this circle through a mechanism like random spot checks on rationales, then things may become eventually consistent, but in many cases I think the time lag for propagating updates would be considerable. Most people would not spot check any particular decision, by definition. Anything that requires folks to repeatedly revisit the group’s conclusions for all of their discarded ideas ends up being burdensome, IMO. So I have trouble seeing an obvious mechanism by which folks would promptly notice that the group reverted its decision that a particular org is not worth supporting. The only possibilities I can think of involve more rigorously centralized coordination than I believe (as a loosely-informed outsider) currently exists in EA.
If the premise is that this very communication style is the problem, then how does one fix that without re-creating much of the original burden on the individual that our group-level coordination was trying to avoid?
The broken group-level process doesn’t solve anything, it’s broken. I don’t know how to fix it, but a first step would be thinking about the problem at all, rather than trying to ignore it or dismiss it as intractable before trying.
Okay, so you’re defining the problem as groups transmitting too little information? Then I think a natural first step when thinking about the problem is to determine an upper bound on how much information can be effectively transmitted. My intuition is that the realistic answer for many recipients would turn out to be “not a lot more than is already being transmitted”. If I’m right about that (which is a big “if”), then we might not need much thinking beyond that point to rule out this particular framing of the problem as intractable.
I think you’re very very wrong about that.
Fair enough. Thanks for the conversation!