What are some examples of this being a real concern
I didn’t give this example in the original because it could look like calling out a specific individual author, which would be harsh. But it does seem like this post needs an example, so I’ve linked it here. Please be nice; I’m not giving them as an example of a bad person, just someone whose post triggered bad dynamics.
This is something I’ve been thinking about for a while, but it was prompted by the recent “On what basis did Founder’s Pledge disperse $1.6 mil. to Qvist Consulting from its Climate Change Fund?” It reads as a corruption exposé, and I think Founders Pledge judged correctly that if they didn’t get their response out quickly a lot of people would have shifted their views in ways that (the original poster agrees) would have been wrong.
The problem of people not getting the right information here seems hard to solve. For example, if you see just the initial post and none of the follow-up I think it’s probably right to be somewhat more skeptical of FP. After the follow-up we have (a) people who saw only the original, (b) people who saw only the follow-up, and (c) people who saw both. And even if the follow-up gets as many readers as the original and everyone updates correctly on the information they have, groups (b) and (c) are fine but (a) is not.
how orgs are using time-sensitive PR to deceive the social space.
Is this something you see EA orgs doing? Or something you’re worried they would do?
I appreciate you sharing the example. I read the post, and I’m confused. It seems fine to me. Like, I’d guess (though could be wrong) that if I read this without context, I’d sort of shrug and be like “huh, I guess FP doesn’t write super detailed grant reports”. It doesn’t read like a corruption exposé to me.
If someone is lying or distorting, then that can cause unjustified harm. If such a person could be convinced to run things by orgs beforehand, that would perhaps be good, though not obviously.
If someone is NOT lying or distorting, then I think it’s good for them to share in an efficient way, i.e. post without friction from running things by orgs. If there’s harm, it’s not directional. If people are just randomly not getting information, then that’s bad, but it doesn’t imply sharing less information would be good. If there’s lock-in and info cascades, that’s bad, but the answer isn’t “don’t share information”.
You wrote:
When posting critical things publicly, however, unless it’s very time-sensitive we should be letting orgs review a draft first
Would you also call for positive posts to be run by an org’s biggest critics? I could see that as a reasonable position.
Is this something you see EA orgs doing? Or something you’re worried they would do?
It’s something I’d worry they would do if they were investing in PR this way. It’s something I worry they currently do, because EA social dynamics have tended in the past to motivate lying (https://srconstantin.github.io/2017/01/17/ea-has-a-lying-problem.html). And it’s something I worry EAs would coordinate to silence discussion of, by shunning and divesting from people who bring it up.
I think some people will read the article as “they should have given more details publicly”, but if that was what the author was trying to say they could have written a pretty different post. Something like:
Founders Pledge Should Be Publishing More Detailed Grant Reports
In July 2022 Founders Pledge granted $1.6M to Qvist Consulting. FP hasn’t published their reasoning for this grant and doesn’t link any references. This is not a level of transparency we should accept for a fund that accepts donations from the public, and especially one listed as top rated by GWWC.
Instead, they walk the reader through a series of investigative steps in a way that reads like someone uncovering a corrupt grant.
Would you also call for positive posts to be run by an org’s biggest critics? I could see that as a reasonable position.
I think this would be positive, but putting it into practice is hard. If I’m writing something about Founders Pledge, I don’t know who their biggest critics are, so who do I share the post with? If that were the only problem, I could imagine a system where each org has a notifications mailing list: anyone can post “here’s a draft of something about you I’m thinking of publishing”, and anyone interested in forthcoming posts can subscribe. But while I would trust Founders Pledge to behave honorably with my draft (not share it, not scoop me, etc.), I have significantly less trust for large unvetted groups.
If you had a proposal for how to do this, though, I’d be pretty interested!
EA social dynamics have tended in the past to motivate lying
I didn’t find that post very convincing when it came out, and still don’t. I think the Forum discussion was pretty good, especially @Raemon’s comments. And Sarah’s followup is also worth reading.
Instead, they walk the reader through a series of investigative steps in a way that reads like someone uncovering a corrupt grant.
Huh. It just sounded like “I thought I’d find some information and then I didn’t” to me. Maybe I’m just being tone deaf. Like, it sounded like a (boring and result-less) stack trace of some investigation.
I think this would be positive, but putting it into practice is hard.
Ok. Yeah I don’t see an obvious implementation; I was mainly trying to understand your position, though maybe it would actually be good.
Thanks for the links.
Huh. It just sounded like “I thought I’d find some information and then I didn’t” to me. Maybe I’m just being tone deaf. Like, it sounded like a (boring and result-less) stack trace of some investigation.
I do think it is literally that, and I think that’s probably how the author intended it. But I think many people won’t read it that way?
You may well be right. What’s important to me here is highlighting that the cost is coming from this property of readers. Like, I don’t like the norm proposal, but if it were “made a norm” (whatever that means), I’d want it to be emphatically tagged with “… and this norm is only here because of the general and alarming state of affairs where people will read things for tone in addition to content, do groupthink, do info cascades, and take things as adversarial moves in a group conflict calling for side-choosing”.