What’s wrong with an org posting a response later? Why do they have to scramble? This isn’t a rhetorical question. I imagine two of the reasons are:
If there’s a misconception caused by the post, then until the misconception is corrected, there could be unwarranted harm to the org. This seems like a reasonable concern in theory, but is it really a concern in practice? What are some examples of this being a real concern (if not for point 2 below)?
If there’s a misconception caused by the post, then the misconception might become ingrained in the social space and be more difficult for the org to correct.
If point 2 is necessary for your argument, then the problem seems to be that people get ingrained opinions that can’t later be corrected. Why does that happen? Are the processes driven by people with really ingrained opinions actually processes that good-doing orgs want to be a part of? I expect people to shrug and say “yeah, but look, that’s just how the world works, people get ingrained opinions, we have to work around that”. My response is: why on earth are you forfeiting victory without thoroughly discussing the problem?
A more convincing argument would have to discuss how orgs are using time-sensitive PR to deceive the social space.
This comment feels like wishful thinking to me. Like, I think our communities are broadly some of the more truth-seeking communities out there. And yet, they have flaws common to human communities, such as both 1 and 2. And yet, I want to engage with these communities, and to cooperate with them. That cooperation is made much harder if actors blithely ignore these dynamics by:
Publishing criticism that could wait
Pretending that they can continue working on that strategy doc they were working on, while there’s an important discussion centered on their organization’s moral character happening in public
I have long experience of watching conversations about orgs evolve. I advise my colleagues to reply urgently. I don’t think this is an attempt to manipulate anyone.
What are your ideas for attenuating the anti-epistemic effects of belief lock-in, groupthink, and information cascades?
I didn’t give this example in the original because it could look like calling out a specific individual author, which would be harsh. But it does seem like this post needs an example, so I’ve linked it here. Please be nice; I’m not giving them as an example of a bad person, just someone whose post triggered bad dynamics.
This is something I’ve been thinking about for a while, but it was prompted by the recent On what basis did Founder’s Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund? It reads as a corruption exposé, and I think Founders Pledge judged correctly that if they didn’t get their response out quickly a lot of people would have shifted their views in ways that (the original poster agrees) would have been wrong.
The problem of people not getting the right information here seems hard to solve. For example, if you see just the initial post and none of the followup I think it’s probably right to be somewhat more skeptical of FP. After the follow-up we have (a) people who saw only the original, (b) people who saw only the follow-up, and (c) people who saw both. And even if the follow-up gets as many readers as the original and everyone updates correctly on the information they have, groups (b) and (c) are fine but (a) is not.
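To make that concrete, here is a minimal back-of-the-envelope sketch in Python, using entirely made-up readership numbers (nothing below is data about any real post): however widely the follow-up circulates, the readers it happens to miss stay in group (a) with only the negative update.

```python
# Toy model with invented numbers: how many readers end up in group (a),
# i.e. saw the criticism but never the follow-up?
readers_of_criticism = 1000   # assumed audience of the original critical post
p_sees_followup = 0.5         # assumed chance a given reader later sees the reply

group_a = readers_of_criticism * (1 - p_sees_followup)  # saw criticism only
group_c = readers_of_criticism * p_sees_followup        # saw both

print(f"group (a), criticism only: {group_a:.0f} readers")
print(f"group (c), saw both:       {group_c:.0f} readers")
# Even if the follow-up's total readership matches the original's (group (b)
# plus group (c)), every reader of the criticism it misses stays in group (a).
```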
Is this something you see EA orgs doing? Or something you’re worried they would do?
I appreciate you sharing the example. I read the post, and I’m confused. It seems fine to me. Like, I’d guess (though could be wrong) that if I read this without context, I’d sort of shrug and be like “huh, I guess FP doesn’t write super detailed grant reports”. It doesn’t read like a corruption exposé to me.
If someone is lying or distorting, then that can cause unjustified harm. If such a person could be convinced to run things by orgs beforehand, that would perhaps be good, though not obviously.
If someone is NOT lying or distorting, then I think it’s good for them to share in an efficient way, i.e. post without friction from running things by orgs. If there’s harm, it’s not directional. If people are just randomly not getting information, then that’s bad, but it doesn’t imply sharing less information would be good. If there’s lock-in and info cascades, that’s bad, but the answer isn’t “don’t share information”.
You wrote:
When posting critical things publicly, however, unless it’s very time-sensitive we should be letting orgs review a draft first
Would you also call for positive posts to be run by an org’s biggest critics? I could see that as a reasonable position.
It’s something I’d worry they would do if they were investing in PR this way. It’s something I worry that they currently do, because the EA social dynamics have tended in the past to motivate lying (https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html). It’s something that I worry EAs would coordinate on to silence discussion of, by shunning and divesting from people who bring it up.
I think some people will read the article as “they should have given more details publicly”, but if that was what the author was trying to say they could have written a pretty different post. Something like:
Founders Pledge Should Be Publishing More Detailed Grant Reports
In July 2022 Founders Pledge granted $1.6M to Qvist Consulting. FP hasn’t published their reasoning for this grant and doesn’t link any references. This is not a level of transparency we should accept for a fund that accepts donations from the public, and especially one listed as top rated by GWWC.
Instead, they walk the reader through a series of investigative steps in a way that reads like someone uncovering a corrupt grant.
I think this would be positive, but putting it into practice is hard. If I’m writing something about Founders Pledge I don’t know who their biggest critics are, so who do I share the post with? If that were the only problem I could imagine a system where each org has a notifications mailing list where anyone can post “here’s a draft of something about you I’m thinking of publishing” and anyone interested in forthcoming posts can subscribe. But while I would trust Founders Pledge to behave honorably with my draft (not share it, not scoop me, etc) I have significantly less trust for large unvetted groups.
If you had a proposal for how to do this, though, I’d be pretty interested!
I didn’t find that post very convincing when it came out, and still don’t. I think the Forum discussion was pretty good, especially @Raemon’s comments. And Sarah’s followup is also worth reading.
Huh. It just sounded like “I thought I’d find some information and then I didn’t” to me. Maybe I’m just being tone deaf. Like, it sounded like a (boring and result-less) stack trace of some investigation.
Ok. Yeah I don’t see an obvious implementation; I was mainly trying to understand your position, though maybe it would actually be good.
Thanks for the links.
I do think it is literally that, and I think that’s probably how the author intended it. But I think many people won’t read it that way?
You may well be right. What’s important to me here is that it be highlighted that the cost is coming from this property of readers. Like, I don’t like the norm proposal, but if it were “made a norm” (whatever that means), I’d want it to be emphatically tagged with “… and this norm is only here because of the general and alarming state of affairs where people will read things for tone in addition to content, do groupthink, do info cascades, and take things as adversarial moves in a group conflict calling for side-choosing”.
Simple example:
Person posts something negative about GiveWell. I read it, shrug, and say I guess I’ll donate somewhere else in the future.
Two days later a rebuttal appears which, if I ever read it, would change my mind. But I never do read it, because I’ve moved on, and GiveWell isn’t hugely important to me.
Is this a fictional example? You imagine that this happens, but I’m skeptical. Who would see just one post on LessWrong or the EA forum about GiveWell, and thenceforth not see or update on any further info about GiveWell?
I don’t know about Yair’s example, but it’s possible they just miss the rebuttal. They’d see the criticism, not happen to log onto the EA Forum on the days when GiveWell’s response is at the top of the forum, and update only on the criticism, because by a week or two later many people probably just have a couple main points and a background negative feeling left in their brains.
It’s concerning if, as seems to be the case, people are making important decisions based on “a couple main points and a background negative feeling left in their brains”.
But people do make decisions like this, which is a well-known psychological result. We’ve got to live with it, and it’s not GiveWell’s fault that people are like this.
As I said in my (unedited) top-level comment:
The latter point does not seem to follow from the former.
I do not know of any achievable plan to make a majority of the world’s inhabitants rational.
“We’ve got to” implies there’s only one possibility.
We can also just ignore the bad decision makers?
We can go extinct?
We can have a world autocrat in the future?
We can… do a lot of things other than just live with it?
A fictional example.
Anecdote of me (not super rationalist-practiced, also just at times dumb): I sometimes discover that stuff I briefly took to be true in passing is false later. Feels like there’s an edge of truths/falsehoods that we investigate pretty loosely but still assign some valence of true/false to, maybe a bit too liberally at times.
What happens when you have to make a decision that would depend on stuff like that?
I am unaware of those decisions at the time. I imagine people are, to some degree, ‘making decisions under uncertainty’, even if that uncertainty could be resolved by info somewhere out there. Perhaps there’s some optimization of how much time you spend looking into something versus how right you could expect to be?
Yeah, there’s always going to be tradeoffs. I’d just think that if someone was going to allocate $100,000 of donations, or decide where to work, based on something they saw in a blog post, then they’d e.g. go and recheck the blog post to see if someone responded with a convincing counterargument.
A lot of it is more subtle and continuous than that: when someone is trying to decide where to give, do I point them towards Founders Pledge’s writeups? This depends a lot on what I think of Founders Pledge overall: I’ve seen some things that make me positive on them, some that make me negative, and like most orgs I have a complicated view. As a human I don’t have full records on the provenance of all my views, and even if I did, checking hundreds of posts for updates before giving an informal recommendation would be all out of proportion.
Okay, but like, it sounds like you’re saying: we should package information together into packets so that if someone randomly selects one packet, it’s a low-variance estimate of the truth; that way, people who spread opinions based on viewing a very small number of packets of information won’t spread misconceptions. This seems like a really really low benefit for a significant cost, so it’s a bad norm.
That’s not what I’m saying. Most information is not consumed by randomly selecting packets, so optimizing for that kind of consumption is pretty useless. In writing a comment here it’s fine to assume people have read the original post and the chain of parent comments, and generally fine to assume they’ve read the rest of the comments. On the other hand “top level” things are often read individually, and there I do think putting more thought into how it stands on its own is worth it.
Even setting aside the epistemic benefits of making it more likely that someone will see both the original post and the response, though, the social benefits are significant. I think a heads up is still worth it even if we only consider how the org will be under strong pressure to respond immediately once the criticism is public, and the negative effects that has on the people responsible for producing that response.
I dunno man. If I imagine someone who’s sort of peripheral to EA but knows a lot about X, and they see EA org-X doing silly stuff with X, and they write a detailed post, only to have it downvoted due to the norm… I expect that to cut off useful information far more than prevent {misconceptions among people who would have otherwise had usefully true and detailed models}.
I agree that would be pretty bad. This is a norm I’m pushing for people within the EA community, though, and I don’t think we should be applying it to external criticism? For example, when a Swedish newspaper reported that FLI had committed to fund a pro-Nazi group, this was cross-posted to the forum. I don’t think downvoting that discussion on the basis that FLI hadn’t had a chance to respond yet would have been reasonable at all.
I also don’t think downvoting is a good way of handling violations of this norm. Instead I want to build the norm positively, by people including at the ends of their posts “I sent a draft to org for review” and orgs saying “thanks for giving us time to prepare a response” in their responses. To the extent that there’s any negative enforcement I like Jason’s suggestion, that posts where the subject didn’t get advance notice could get a pinned mod comment.
I expect much of the harm comes from people updating an appropriate amount from the post, not seeing the org/person’s reply because they never had to make any important decisions on the subject, then noticing later that many others have updated similarly, and subsequently doing a group think. Then the person/org is considered really very bad by the community, so other orgs don’t want to associate with them, and Open Phil no longer wants to fund them because they care about their social status.
To my knowledge this hasn’t actually happened, though possibly this is because nobody wants to be talking about the relevant death-spiraled orgs.
Seems more likely the opposite is at play with many EA orgs like OpenPhil or Anthropic (Edit: in the sense that imo many are over-enthusiastic about them. Not necessarily to the same degree, and possibly for reasons orthogonal to the particular policy being discussed here), so I share your confusion about why orgs would force their employees to work over the weekend to correct misconceptions about them. I think most just want to seem professional and correct to others, and this value isn’t directly related to the core altruistic mission (unless you buy the signaling hypothesis of altruism).
Yeah, doing a group think seems to increase this cost. (And of course the group think is the problem here, and playing to the group think is some sort of corruption, it seems to me.)
I don’t understand this part of your response. Can you expand?
Suppose that it actually were the case that OP and so on would shun orgs based on groupthink rather than based on real reasons. Now, what should an org do, if faced with the possibility of groupthink deciding the org is bad? An obvious response is to try to avoid that. But I’m saying that this response is a sort of corruption. A better response would be to say: Okay, bye! An even better response would be to try to call out these dynamics, in the hopes of redeeming the groupthinkers. The ways in which the first response is corruption:
You’re wasting time on trying to get people to like you, but those people have neutered their ability to get good stuff done, by engaging in this enforced groupthink.
You’re distorting your thoughts, confusing yourself between real reality and social reality.
You’re signaling capitulation to everyone else, saying, “Yes, even people as originally well-intentioned as we were, even such people will eventually see the dark truth, that all must be sacrificed to the will of groupthink”. This also applies internally to the org.
I don’t have a clear opinion on the original proposal… but is it really possible to completely avoid groupthink that decides an org is bad? (I assume that “bad” in this context means something like “not worth supporting”.)
I would say that some degree of group-level weeding out of unworthy organizations seems like a transparently necessary step given the sheer number of organizations that exist. I would also agree with you that delegating all evaluation to the group level has obvious downsides.
If we accept both of those points, I think the question is more a matter of how to most productively scope the manner and degree to which individuals delegate their evaluations to a broader group, rather than a binary choice to wholly avoid (or support) such delegation.
I’m not saying don’t use group-level reasoning. I’m saying that, based on how people are advocating behaving, it seems like people expect the group-level reasoning that we currently actually have, to be hopelessly deranged. If that expectation is accurate, then this is a far worse problem than almost anything else, and we should be focusing on that. No one seems to get what I’m saying though.
Do you disagree that “some degree of group-level weeding out of unworthy organizations seems like a transparently necessary step given the sheer number of organizations that exist”? If not, how does that dynamic differ from “shun[ning] orgs based on groupthink rather than based on real reasons”?
Because groups can in theory compute real reasons. “Group-level weeding out” sounds like an action that a group can take. One can in principle decide which actions to take based on reasons. Groupthink refers to making decisions based not on real reasons, but rather on emergent processes that don’t particularly track truth and instead e.g. propagate social pressures or whatever. As an example: https://en.wikipedia.org/wiki/Information_cascade
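For readers who haven’t seen the cascade model before, here is a small self-contained simulation: a toy sequential-choice setup in the spirit of the classic information-cascade literature rather than anything taken from the linked article, with made-up parameters. Each agent gets a noisy private signal but also sees earlier agents’ public choices; once a few early choices pile up on one side, later agents follow the crowd, and some runs lock the whole group onto the wrong answer even though most private signals pointed the right way.

```python
import random

def run_cascade(p_correct=0.7, n_agents=30, seed=0):
    """Toy information cascade: each agent gets a private binary signal that
    points to the truth with probability p_correct, sees all earlier public
    choices, and picks whichever side the tally of (earlier choices + own
    signal) favours, falling back on their own signal when the tally is zero.
    Earlier choices are weighted like signals, which is the simplification
    that lets herds form."""
    random.seed(seed)
    truth = 1
    choices = []
    for _ in range(n_agents):
        signal = truth if random.random() < p_correct else -truth
        tally = sum(choices) + signal
        choice = signal if tally == 0 else (1 if tally > 0 else -1)
        choices.append(choice)
    return choices

# Fraction of runs where the last 10 agents mostly herd onto the wrong answer,
# despite each private signal being correct 70% of the time.
runs = 1000
wrong = sum(1 for s in range(runs) if sum(run_cascade(seed=s)[-10:]) < 0)
print(f"{wrong / runs:.1%} of simulated threads lock in on the wrong answer")
```

Counting earlier public choices with the same weight as a private signal is a deliberate simplification; it is what lets a short early run of choices swamp everyone’s later private evidence.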
For that distinction to be relevant, individuals need to be able to distinguish whether a particular conclusion of the group is groupthink or whether it’s principled.
If the information being propagated in both cases is primarily the judgment, how does the individual group member determine which judgments are based on real reasons vs not? If the premise is that this very communication style is the problem, then how does one fix that without re-creating much of the original burden on the individual that our group-level coordination was trying to avoid?
If folks try to square this circle through a mechanism like random spot checks on rationales, then things may become eventually consistent but in many cases I think the time lag for propagating updates may be considerable. Most people would not spot check any particular decision, by definition. Anything that requires folks to repeatedly look at the group’s conclusions for all of their discarded ideas ends up being burdensome IMO. So, I have trouble seeing an obvious mechanism for folks to promptly notice that the group reverted their decision that a particular org is not worth supporting? The only possibilities I can think of involve more rigorously centralized coordination than I believe (as a loosely-informed outsider) to be currently true for EA.
The broken group-level process doesn’t solve anything, it’s broken. I don’t know how to fix it, but a first step would be thinking about the problem at all, rather than trying to ignore it or dismiss it as intractable before trying.
Okay, so you’re defining the problem as groups transmitting too little information? Then I think a natural first step when thinking about the problem is to determine an upper bound on how much information can be effectively transmitted. My intuition is that the realistic answer for many recipients would turn out to be “not a lot more than is already being transmitted”. If I’m right about that (which is a big “if”), then we might not need much thinking beyond that point to rule out this particular framing of the problem as intractable.
I think you’re very very wrong about that.
Fair enough. Thanks for the conversation!