It’s true that any metric can be gamed or can distort behavior. No metric can substitute for judgment.
Re programmatic evaluation: It’s true that nonprofits *can* do this, but that only matters if *donors* on the whole care. This is why I said:
My sense is that donors do care about evaluation, on the whole. It’s not just GiveWell / Open Philanthropy / EA who think about this :P
See for example https://www.rockpa.org/guide/assessing-impact/
My sense is that they don’t care nearly enough.
How could we find evidence one way or another?
We could look at donors’ public materials, for example evaluation requirements listed in grant applications. We could examine the programs of conferences or workshops on philanthropy and see how often this topic is discussed. We could investigate the reports and research literature on this topic. But I don’t know how to define “enough” concern.