I think you’re right that I overstated EA’s tendency to assume generalisability, particularly when it comes to testing interventions in global health and poverty (though much less so when it comes to research in other cause areas). Eva Vivalt’s interview with 80K and the more recent EA Global sessions discussing the limitations of the randomista approach are examples. Some charity interventions incubated by GiveWell also seemed to take a targeted regional approach (e.g. No Lean Season), and there’s Ben Kuhn’s ‘local context plus high standards’ theory for Wave. So point taken!
I still worry about EA-driven field experiments relying too much, too quickly, on filtering experimental observations through quantitative metrics exported from Western academia. In their local implementation, these metrics may either fail to track the aspects we had in mind, or simply not reflect what actually exists and is relevant in people’s local context. I haven’t yet heard of EA founders who started out by doing open-ended qualitative fieldwork on the ground (but I’m happy to hear of examples!).
I assume generalisability of metrics would be less of a problem for medical interventions like anti-malaria nets and deworming tablets. But here’s an interesting claim I just came across:
One-size-fits-all doesn’t work and the ways medicine affects people varies dramatically.
With schistosomiasis we found that fisherfolk, who are the most likely to be infected, were almost entirely absent from the disease programme and they’re the ones defecating and urinating in the water, spreading the disease.
Fair points! I don’t know if I’d consider JPAL directly EA, but they at least claim to conduct regular qualitative fieldwork before, during, and after their formal interventions (the source is Poor Economics; I’ve sadly forgotten the exact passage, but they mention it several times). Similarly, GiveDirectly regularly meets with program participants for both structured polls and unstructured focus groups, if I recall correctly. Regardless, I agree with the concrete point that this is an important thing to do, and that EA/rationality folks are less inclined to collect unstructured qualitative feedback than its importance warrants.
Interesting, I didn’t know GiveDirectly ran unstructured focus groups, nor that JPAL does qualitative interviews at various stages of testing interventions. Adds a bit more nuance to my thoughts, thanks!
I appreciate your thoughtful comment too, Dan.
By the way, here’s one of GiveDirectly’s blog posts on survey and focus group results:
https://www.givedirectly.org/what-its-like-to-receive-a-basic-income/