This post has some faults, but it correctly points out the narrowness of current EA thinking.
The problem with effective altruism is that it depends on values, and values are hard. Values are also notoriously gameable by politics. Currently, EA is Afrocentric and only effective for a very narrow value system.
EA is focused on saving the maximum number of lives in the present, or on giving directly to the poorest areas. This approach benefits those people, but it's not clear that it has a large impact on the future of humanity. It also seems very near-mode.
GiveWell claims that there are flow-through effects of charity, such as greater economic development, but these are underspecified.
Science, technology, medicine, and economic development have had a large positive impact on humanity. The style of EA that appeals to me would focus on promoting those things. Existential risk reduction also has appeal. Current EA claims to benefit economic development, but it's not clear that it's the best way to do that. And current EA seems weak for promoting science, technology, and medicine.
Most scientific, medical, and technological advances have come from the West (and Asia). If we want to see more of those advances, then shouldn’t we be investing capital in the places with a historical track record of accomplishment?
If you are approaching EA with the attitude of an investor in the future of humanity, then you must also consider national differences in IQ and the correlation of intelligence with per capita income. An investor with a blank slate attitude will be sorely disappointed, because many areas will likely hit a wall in accomplishment.
The current EA approach seems to focus on aid over investment. From a redistributive standpoint, helping the most needy makes sense. Yet from an investment standpoint, helping the most productive makes more sense, even if the bang for your buck is less. Since the most productive people are typically less needy, these two approaches come to diametrically opposite conclusions. There is also a potential conflict between X-risk reduction and technological progress. This underscores how values are hard, and the tensions between different potential value systems in EA.
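To make that tension concrete, here is a minimal toy model in Python. Every number in it is invented purely for illustration (nothing here comes from GiveWell or any other source); the point is only that which strategy "wins" depends entirely on the assumed compounding rates and time horizons, which is exactly where values and underspecified flow-through effects enter.

```python
# Toy model of the aid-vs-investment tension.
# All parameter values are invented for illustration only.

def total_value(immediate_benefit, annual_compounding, years):
    """Crude value of one donated dollar after flow-through compounding."""
    return immediate_benefit * (1 + annual_compounding) ** years

# "Aid" framing: big immediate benefit per dollar, weak compounding.
aid = total_value(immediate_benefit=10.0, annual_compounding=0.01, years=50)

# "Investment" framing: smaller immediate benefit per dollar,
# stronger compounding through productivity and growth.
investment = total_value(immediate_benefit=1.0, annual_compounding=0.07, years=50)

print(f"aid:        {aid:.1f}")         # ~16.4
print(f"investment: {investment:.1f}")  # ~29.5
# Nudge the assumed rates or the horizon and the ranking flips;
# the disagreement is about values and parameters, not arithmetic.
```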
Yet perhaps there is a way to reconcile the aid and investment approaches: find a place in the world that has poverty or other problems but is high in human capital, and invest there. Is there really no such place in the world like this?
EA’s current research seems to focus on need-based, accomplishment-blind aid, but this only satisfies a narrow range of the values that EA could represent. It is curious that all major recommended EA interventions seem politically appealing, and that no major EA intervention has (to my knowledge) been proposed that is politically incorrect. Yes, EA has recommended avoiding certain popular interventions, but only in order to get better results within the same progressive value system.
We live in a very convenient world if helping humanity involves doing things that just happen to make people look good in Bay Area parties and the media in 2015. I am concerned that there is a file drawer effect for potential EA approaches that are politically awkward.
We live in a very convenient world if helping humanity involves doing things that just happen to make people look good in Bay Area parties and the media in 2015.
Would it be a more or less convenient world if helping humanity involved giving money to rich and smart people living in the Bay Area? (Which is what your solution seems to suggest.)
My intuition is that if we want to see more good stuff happen, then maybe we should give some resources to the kinds of people who have made good stuff happen historically, and make sure we are getting a return on investment. I do not think all these people are located in the Bay Area, and my previous post does suggest trying to find poor people who are likely to be highly productive.
Flow-through effects have not been completely ignored by GiveWell, but their comments on them are much less rigorous and careful than their other work:
http://www.jefftk.com/p/flow-through-effects-conversation
http://blog.givewell.org/2013/05/15/flow-through-effects/