Is this a fictional example? You imagine that this happens, but I’m skeptical. Who would see just one post on LessWrong or the EA Forum about GiveWell, and thenceforth not see or update on any further info about GiveWell?
I don’t know about Yair’s example, but it’s possible they just miss the rebuttal. They’d see the criticism, not happen to log onto the EA Forum on the days when GiveWell’s response is at the top of the forum, and update only on the criticism, because a week or two later many people probably just have a couple of main points and a background negative feeling left in their brains.
It’s concerning if, as seems to be the case, people are making important decisions based on “a couple main points and a background negative feeling left in their brains”.
But people do make decisions like this, which is a well-known psychological result. We’ve got to live with it, and it’s not GiveWell’s fault that people are like this.
As I said in my (unedited) toplevel comment: “I expect people to shrug and say ‘yeah, but look, that’s just how the world works, people get ingrained opinions, we have to work around that’. My response is: why on earth are you forfeiting victory without thoroughly discussing the problem?”
The latter point does not seem to follow from the prior one.
I do not know of any achievable plan to make a majority of the world’s inhabitants rational.
“We’ve got to” implies there’s only one possibility.
We can also just ignore the bad decision makers?
We can go extinct?
We can have a world autocrat in the future?
We can… do a lot of things other than just live with it?
A fictional example.
An anecdote about me (not super practiced as a rationalist, also just at times dumb): I sometimes later discover that stuff I briefly took to be true in passing is false. It feels like there’s an edge of truths/falsehoods that we investigate pretty loosely but still tag with some valence of true/false, maybe a bit too liberally at times.
What happens when you have to make a decision that would depend on stuff like that?
I am unaware of those decisions at the time. I imagine people are, to some degree, ‘making decisions under uncertainty’, even if that uncertainty could be resolved by info somewhere out there. Perhaps there’s some optimization of how much time you spend looking into something against how right you could expect to be?
Yeah, there are always going to be tradeoffs. I’d just think that if someone was going to allocate $100,000 of donations, or decide where to work, based on something they saw in a blog post, then they’d, e.g., go and recheck the blog post to see if someone had responded with a convincing counterargument.
A lot of it is more subtle and continuous than that: when someone is trying to decide where to give, do I point them towards Founders Pledge’s writeups? This depends a lot on what I think of Founders Pledge overall: I’ve seen some things that make me positive on them, some that make me negative, and like most orgs I have a complicated view. As a human I don’t have full records on the provenance of all my views, and even if I did, checking hundreds of posts for updates before giving an informal recommendation would be all out of proportion.
Okay, but like, it sounds like you’re saying: we should package information together into packets so that if someone randomly selects one packet, it’s a low-variance estimate of the truth; that way, people who spread opinions based on viewing a very small number of packets of information will still spread roughly accurate views. This seems like a really, really low benefit for a significant cost, so it’s a bad norm.
That’s not what I’m saying. Most information is not consumed by randomly selecting packets, so optimizing for that kind of consumption is pretty useless. In writing a comment here it’s fine to assume people have read the original post and the chain of parent comments, and generally fine to assume they’ve read the rest of the comments. On the other hand “top level” things are often read individually, and there I do think putting more thought into how it stands on its own is worth it.
Even setting aside the epistemic benefits of making it more likely that someone will see both the original post and the response, though, the social benefits are significant. I think a heads-up is still worth it even if we only consider the strong pressure the org will be under to respond immediately once the criticism is public, and the negative effects that pressure has on the people responsible for producing the response.
I dunno man. If I imagine someone who’s sort of peripheral to EA but knows a lot about X, and they see EA org-X doing silly stuff with X, and they write a detailed post, only to have it downvoted due to the norm… I expect that to cut off useful information far more than prevent {misconceptions among people who would have otherwise had usefully true and detailed models}.
I agree that would be pretty bad. This is a norm I’m pushing for people within the EA community, though, and I don’t think we should be applying it to external criticism? For example, when a Swedish newspaper reported that FLI had committed to fund a pro-Nazi group, this was cross-posted to the forum. I don’t think downvoting that discussion on the basis that FLI hadn’t had a chance to respond yet would have been reasonable at all.
I also don’t think downvoting is a good way of handling violations of this norm. Instead I want to build the norm positively, by people including at the ends of their posts “I sent a draft to the org for review” and orgs saying “thanks for giving us time to prepare a response” in their responses. To the extent that there’s any negative enforcement, I like Jason’s suggestion: posts where the subject didn’t get advance notice could get a pinned mod comment.