A strange opinion article in The Guardian today: it is not entirely clear whether the authors object to a concern with effectiveness, or just think that “assessing the short-term impacts of micro-projects” is somehow misguided (and if so, why that is).
The article starts off in a way that seems either extremely clueless or purposefully misleading. I have a hard time seeing how someone can say that global poverty is intractable in the same paragraph that speaks about failing Millennium goals, when the Millennium goal of halving the number of people who live on less than a dollar a day between 2000 and 2015 was met. The goal of halving the number of undernourished people was also met.
The article should go into the fake news bin even though it might be possible to argue the case in a decent way.
The text seems pretty clear on both these questions.
...
The problems with choosing interventions based only on how well they are measured to perform are similar to the problems faced by model-free reinforcement learning algorithms (such as: the need to collect lots of high-quality data, the costs of exploration, local maxima that could be avoided with better models, Goodhart’s law, lack of human understanding of the underlying phenomena, difficulty learning long-term dependencies, use of CDT or EDT as a decision theory), because the process of choosing interventions based only on measured performance is literally a model-free reinforcement learning algorithm.
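To make the analogy concrete, here is a minimal sketch (all intervention names and effect sizes are made up for illustration, not taken from the article or any evaluation): picking whichever intervention has measured best so far is exactly an epsilon-greedy bandit, one of the simplest model-free RL algorithms.

```python
import random

# Hypothetical interventions and their true effects, unknown to the learner.
# The learner never sees mechanisms, only noisy measured outcomes.
TRUE_EFFECT = {"bednets": 0.9, "deworming": 0.6, "cash_transfers": 0.7}

def measure(intervention: str) -> float:
    """Noisy short-term measurement of an intervention's impact."""
    return TRUE_EFFECT[intervention] + random.gauss(0, 0.5)

def choose_interventions(n_rounds: int = 1000, epsilon: float = 0.1):
    """Epsilon-greedy bandit: fund whatever has measured best so far."""
    totals = {k: 0.0 for k in TRUE_EFFECT}
    counts = {k: 0 for k in TRUE_EFFECT}
    for _ in range(n_rounds):
        if random.random() < epsilon or not any(counts.values()):
            arm = random.choice(list(TRUE_EFFECT))  # explore (costly)
        else:
            # exploit: highest average measured outcome, no causal model
            arm = max(totals, key=lambda k: totals[k] / max(counts[k], 1))
        totals[arm] += measure(arm)
        counts[arm] += 1
    return {k: totals[k] / max(counts[k], 1) for k in TRUE_EFFECT}

if __name__ == "__main__":
    print(choose_interventions())
```

The listed problems show up directly in this sketch: the averages only converge after many noisy measurements, the exploration rounds are spent on interventions already measured to be worse, and if `measure` is only a proxy for what we actually care about, optimizing it runs straight into Goodhart’s law.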
One thing the article unfortunately fails to acknowledge is that observational data is often insufficient to infer causality, and RCTs can help here.
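A minimal toy simulation of that point (entirely hypothetical numbers, not from the article): when a confounder drives both who takes up a treatment and their outcomes, the naive observational comparison shows an "effect" even when there is none, while randomizing assignment recovers the truth.

```python
import random

def simulate(n: int = 100_000, randomized: bool = False) -> float:
    """Difference in mean outcome, treated minus untreated.

    Truth in this toy model: the treatment has zero effect; a confounder
    (say, household wealth) raises both treatment uptake and the outcome.
    """
    treated_outcomes, control_outcomes = [], []
    for _ in range(n):
        confounder = random.random()                 # e.g. household wealth
        if randomized:
            treated = random.random() < 0.5          # coin flip, as in an RCT
        else:
            treated = random.random() < confounder   # wealthier -> more uptake
        outcome = confounder + random.gauss(0, 0.1)  # depends only on the confounder
        (treated_outcomes if treated else control_outcomes).append(outcome)
    return (sum(treated_outcomes) / len(treated_outcomes)
            - sum(control_outcomes) / len(control_outcomes))

print("observational difference:", simulate(randomized=False))  # ~0.33, spurious
print("randomized difference:  ", simulate(randomized=True))    # ~0.0, the true null effect
```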