Why GiveWell can’t recommend MIRI or anything like it
There’s an old joke about a man, head down, slowly walking in circles under the light of a street lamp. It is dark and he has lost his wallet.
A passerby offers his assistance, asks what he’s looking for, and joins the search. A second does the same.
Finally, this second helper asks, “Is this where you lost it?”
“No,” comes the reply.
“Then why are you looking over here?”
“Because this is where the light is!”
The same tendency, looking for answers where they can be found rather than where they are, may also be at work in psychological research on rats. We don’t study rats for psychological insight because we think that’s where the psychological insights are; that’s just the only place we can look! (Note: I know studying rats is better than nothing, and we don’t only study rats.)
Likewise with GiveWell. They’ve released their new list of seven recommended charities. Six are efforts to improve health cheaply, and the seventh makes direct cash transfers to help people break out of poverty traps. In theory, these are the most cost-efficient producers of good in the world.
Except, not really. Technological research (especially on AI), effective educational reform, or improving the scientific community’s norms might very well be vastly more fruitful fields.
I don’t think these are missing from GiveWell’s list because they don’t measure up, but because, by GiveWell’s metrics, they can’t be measured at all! GiveWell has given us, perhaps, the best of the charities that can be easily measured.
What if the best charities aren’t easily measurable? Well, then they won’t just not be on the list, they can’t be on the list.
Funny you should mention that...
AI risk is one of the two main focus areas for the Open Philanthropy Project this year (a project GiveWell is part of). You can read Holden Karnofsky’s “Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity.”
They consider AI risk to rank high enough on importance, neglectedness, and tractability (their three main criteria for choosing what to focus on) to be worth prioritizing.
Also: here is a 4,000-word evaluation of MIRI by OpenPhil. How does that square with “they can’t be measured at all”?
On top of all this, there are strategic considerations on GiveWell’s side: they want to offer recommendations they can defend publicly to their donor pool, which is filled with a particular mix of people looking for a particular kind of charity recommendation from GiveWell. Directly comparing MIRI to more straightforward charities like AMF would dilute GiveWell’s brand in a way that would be strategically unwise for them, and these sorts of considerations are part of the reason why OpenPhil exists as a separate entity.
I don’t think you are actually making this argument, but this comes close to an uncharitable view of GiveWell that I strongly disagree with, which goes something like “GiveWell can’t recommend MIRI because it would look weird and be bad for their brand, even if they think that MIRI is actually the best place to donate to.” I think GiveWell / OpenPhil are fairly insensitive to considerations like this and really just want to recommend the things they actually think are best independent of public opinion. The separate branding decision seems like a clearly good idea to me, but I think that if for some reason OpenPhil were forced to have inseparable branding from GiveWell, they would be making the same recommendations.
I downvoted because this feels overly smug to me. I think it’s a legitimate issue, but GiveWell has made many arguments for why they do what they do, and OpenPhil has made some progress on figuring out how to evaluate AI organizations. Sure, many fields might very well be vastly more fruitful, but they also might not. How do we know which ones?
Can you say more about the perceived smugness? It seems to me like a straightforward account of the obvious limitation to GiveWell’s scope. I only didn’t upvote because it seemed too obvious.
To me, the tone came across as “Ho ho ho, look at those stupid GiveWell people who have never heard of the streetlight effect! They’re blinded by their own metrics and can’t even see how awesome MIRI is!” when there’s no engagement with or acknowledgement of (a) materials from GiveWell that address the streetlight-effect argument, (b) OpenPhil, or (c) how to actually start resolving the problem (or even that the problem is particularly hard).
I don’t want to have a high demand for rigor, especially for Discussion-type posts—for me, it’s more about the lack of humility.
For example, this part:
It is not obvious from the article whether the author even checked that no charity evaluated by GiveWell was of this type. For all we know, the author only checked that none of them made the top-seven list. But it is possible that GiveWell actually gave some consideration to such charities, only to conclude that none of them belonged on the top list.
That is, the article reads as if the author automatically concluded that GiveWell is stupid and didn’t even bother to verify his assumption, using only the top-seven list as evidence. To convince me otherwise, it would help to quote some text from GiveWell’s website confirming this. Because I think it is likely that GiveWell considered this topic explicitly, and published their conclusion, whatever it was.
Footnote: https://en.wikipedia.org/wiki/Streetlight_effect
I think this is a good point, although a GiveWell-like site could theoretically compare charities within a particular domain where outcomes aren’t easily measurable. Just because things aren’t easily measurable doesn’t mean they are unmeasurable.
In principle you can measure everything, but the uncertainty about the impact of MIRI and the uncertainty about the impact of AMF are very different.
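To make that concrete, here’s a minimal sketch of what I mean. This is my own toy model, not anything from GiveWell or OpenPhil; the “AMF-like” and “MIRI-like” labels and all the numbers are made up for illustration. Two interventions can share the same median impact estimate while the plausible range around one of them spans orders of magnitude:

    import random

    random.seed(0)

    def sample_impacts(median, spread, n=100_000):
        # Log-normal draws: the median of lognormvariate(0, s) is 1,
        # so `median` really is the median of these samples.
        return sorted(median * random.lognormvariate(0, spread)
                      for _ in range(n))

    # Tight uncertainty: a well-measured intervention ("AMF-like").
    # Wide uncertainty: a hard-to-measure one ("MIRI-like").
    for name, spread in [("tight", 0.2), ("wide", 2.0)]:
        xs = sample_impacts(median=1.0, spread=spread)
        n = len(xs)
        print(f"{name}: median={xs[n // 2]:.2f}, "
              f"5th pct={xs[n // 20]:.4f}, 95th pct={xs[-(n // 20)]:.2f}")

Both distributions have the same median, but a ranked list built from point estimates hides the fact that in the wide case the 5th and 95th percentiles differ by a factor of several hundred.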
Also—at least for most research and development efforts—quite a bit is being spent on them by interested parties, so marginal dollars will not be going to something severely underserved.
This argument does not apply to MIRI.
By that logic, wouldn’t it make the most sense to donate to an organization that lobbies for more international aid or scientific research, rather than attempting to fund it yourself?