EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.
The consequentialist issue could be addressed by the assumption that if only people’s needs were met, their potential for contribution would be equal. Do the people involved in EA generally believe that?
EAs might believe that, but that would be an example of their lack of knowledge of humanity and adoption of simplistic progressivism. Human traits for altruism and accomplishment are not distributed evenly: people vary in clannishness, charity, civic-mindedness, corruption, and IQ. It is most likely that differences between people explain why some groups have trouble building functional institutions and meeting their own needs.
Whether basic needs are met doesn’t explain why some groups within Europe are so different from each other. Southern Europe and parts of Eastern Europe have extremely low concentrations of charitable organizations. Also, good luck explaining the finding in the post I linked in my previous comment, that vegetarianism in the US is correlated at 0.68 with English ancestry (but only weakly with European ancestry). Even different groups of white people are really, really different from each other, such as the differences between Yankees and Southerners in the US, stemming from differences between settlers from different parts of England.
Human groups evolved with geographical separation and under different selection pressures. For example, the clannishness source I linked shows how many different outcomes are related to whether groups fall inside or outside the Hajnal Line of inbreeding. Different rates of inbreeding will result in different strengths of kin selection vs. reciprocal altruism (a rough sketch of that trade-off follows below). For example, here is the map of corruption with the Hajnal Line superimposed.
There is no good reason to believe that humans have equal potential for altruism and accomplishment, though there are benefits to signaling this belief.
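One way to make the kin-selection point above concrete is Hamilton's rule: an altruistic act toward a relative is favored by selection when r × B > C. The numbers below are purely illustrative and not taken from the linked source; the only point is that higher average relatedness, as under sustained cousin marriage, widens the range of costs at which helping kin beats helping strangers.

```python
# Hamilton's rule: altruism toward a relative is selected for when r * B > C,
# where r is genetic relatedness, B the benefit to the recipient, and C the
# cost to the actor.  All values below are illustrative only.

def kin_altruism_favored(r, benefit, cost):
    """True if Hamilton's rule says the altruistic act is favored."""
    return r * benefit > cost

benefit, cost = 10.0, 2.0  # arbitrary payoff units
relatedness = {
    "first cousin, outbred population": 0.125,   # standard coefficient
    "first cousin, long-inbred lineage": 0.25,   # illustrative, roughly doubled
}

for label, r in relatedness.items():
    print(f"{label}: r*B = {r * benefit:.2f} vs C = {cost:.2f} "
          f"-> helping favored: {kin_altruism_favored(r, benefit, cost)}")
```

Nothing here quantifies the group differences claimed above; it only illustrates the direction of the mechanism being invoked.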
Well, quite. The problem I see is that equality of worth is for some a sacred value, leading to valuing all lives equally and directing resources to wherever the most lives can be saved, regardless of whose they are. While it is not something that logically follows from the basic idea of directing resources wherever they can do the most good, I don’t see the EA movement grasping the nettle of what counts as the most good. Lives or QALYs are the only things on the EA table at present.
This matches research showing that there are “sacred values”, like human lives, and “unsacred values”, like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).
Lives or QALYs are the only things on the EA table at present.
How do you come to that conclusion?
When the Open Philanthropy Project researches whether we should spend more effort on dealing with the risk of solar storms, how is that about lives or QALYs?
I may have a limited view of the EA movement. I had in mind primarily GiveWell, whose currently recommended charities are all focussed on directing money towards the poorer parts of the world, to alleviate either disease or poverty. The Good Ventures portfolio of grants is mostly directed to the same sort of thing.
On global threats:
When the Open Philanthropy Project researches whether we should spend more effort on dealing with the risk of solar storms, how is that about lives or QALYs?
How would it not be? Major and prolonged geomagnetic storms threaten the lives and QALYs of everyone everywhere, so there isn’t an issue there of selecting who to save first. Protective measures save everyone.
I had in mind primarily GiveWell, whose currently recommended charities are all focussed on directing money towards the poorer parts of the world, to alleviate either disease or poverty.
You’re confusing the strategic reasons why GiveWell makes those recommendations with the shortest summary of the intervention.
Spending money on health care interventions does more than just save lives. There are a lot of ripple effects.
GiveWell is also creating incentives for charities in general to become more transparent and evidence-based.
Major and prolonged geomagnetic storms threaten the lives and QALYs of everyone everywhere
You said only lives and QALYs. I’m not disputing that it also affects lives and QALYs. I’m disputing that that’s the only thing you get from it.
it is not something that logically follows from the basic idea of directing resources wherever they can do the most good
It depends on how you define “good”. In particular, in some value systems (and in some contexts) human lives are valued according to their productivity, while in other value systems and contexts lives are valued regardless of their economic use or potential.
Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.
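A minimal sketch of what that combination could look like, with made-up numbers (the terminal value per life and the instrumental multipliers are illustrative assumptions, not figures any EA organization actually uses):

```python
# Sketch: a utilitarian who assigns every life the same terminal value can
# still rank interventions differently once instrumental value (expected
# downstream contributions) is added in.  All numbers are made up.

TERMINAL_VALUE = 1.0   # identical terminal value per life saved

interventions = {
    # name: (lives saved per $1M, average instrumental value per life saved)
    "intervention_A": (100, 0.1),
    "intervention_B": (60, 1.5),
}

for name, (lives, instrumental_per_life) in interventions.items():
    combined = lives * (TERMINAL_VALUE + instrumental_per_life)
    print(f"{name}: lives-only score = {lives}, combined score = {combined:.0f}")

# A lives-only ranking prefers A (100 > 60); the combined ranking prefers B
# (60 * 2.5 = 150 > 100 * 1.1 = 110), which is the comment's point.
```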
There is no good reason to believe that humans have equal potential for altruism and accomplishment, though there are benefits to signaling this belief.
That sounds obviously false on its face.
The problem I see is that equality of worth is for some a sacred value, leading to valuing all lives equally and directing resources to wherever the most lives can be saved, regardless of whose they are.
That’s unfortunate. There can be no sacred values. That way lies madness.
Nevertheless:
-- Circular Altruism
Well...
You said only lives and QALYs. I’m not disputing that it also affects lives and QALYs. I’m disputing that that’s the only thing you get from it.
Well, what measure are they using?
I don’t think there’s a single measure. There’s rather an attempt to understand all the effects of an intervention as well as possible.