I have a question about Effective Altruism:
The essence of EA is that people are equal regardless of location: you’d rather give money to poor people in faraway countries than to people in your own country, if that is more effective, even though the latter intuitively feel closer to you. People care more about their own country’s citizens even though they may never have met them. Your compatriots are often more similar to you, culturally and otherwise, than people in faraway countries, and you might feel a certain bond with them. There are obvious examples of this kind of thinking concretely affecting people’s actions. During the Congo Crisis (1960–1966), when the rebels started taking white hostages, the United States and Belgium mounted an almost immediate military operation and the American and European civilians in the area were quickly evacuated. Otherwise the crisis was mostly ignored by the Western powers, and the UN operation was much more low-key than the rescue operation.
In Effective Altruism, should how much you intuitively care about other people be a factor in how much you allocate resources to them?
Can you take this kind of thinking to its logical conclusion: that you shouldn’t allocate any money or resources to the people closest to you, like your family or friends, because you can minimize suffering more effectively by allocating those resources to faraway people?
Note, I’m not criticizing effective altruism or actually supporting this kind of thinking. I’m just playing devil’s advocate.
A possible counterargument: one’s family and friends are essential to one’s mental well-being and you can be a better effective altruist if you support your friends and family.
Maybe it is a problem of purchasing fuzzies and utilons together, and also being hypocritical about it.
Essentially, I could do things that help both other people and me, or I could do things that only help other people, where I get nothing (except a good feeling) out of it. The latter set contains many more options, and more diverse options, so it is pretty likely that the efficient solution for maximizing global utility lies there.
I am not saying this to argue that one should choose the latter. Rather my point is that people sometimes choose the former and pretend they chose the latter, to maximize signalling of their altruism.
“I donate money to ill people, and this is completely selfless because I am healthy and expect to remain healthy.” So, why don’t you donate to ill people in poor countries instead of your neighborhood? Those people could buy a greater increase in health for the same cost. “Because I care about my neighbors more. They are… uhm… my tribe.” So you also support your tribe. That’s not completely selfless. “That’s a very extreme judgement. Supporting people in my tribe is still more altruistic than what many other people do, so what’s your point?”
I guess my point is, if your goal is to support your tribe, just be honest about it. Take a part of your budget and think about the most efficient way of supporting your tribe. And then take another part of your budget and spend it on effective altruism. (The proportion of these two parts, that’s your choice.) You will be helping people selflessly and supporting your tribe, probably getting more points on each scale than you are getting now.
“But I also want recognition from my tribe for my support. They will reward me socially for helping in-tribe members, but will care less about me helping out-tribe members.” Oh, well. That’s even less selfless. I am not judging you here, just suggesting that you make another sub-budget for maximizing your prestige within the tribe and optimize for that goal separately.
“Because that’s too complicated. Too many budgets, too much optimization.” Yeah, you have a point.
Also, if it turns out that I have three sub-budgets as you describe here (X, Y, Z) and there exist three acts (Ax, Ay, Az) which are optimal for each budget, but there exists a fourth act B which is just-barely-suboptimal in all three, it may turn out that B is the optimal thing for me to do despite not being optimal for any of the sub-budgets. So optimizing each budget separately might not be the best plan.
Then again, it might.
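To make the point concrete, here is a toy sketch with invented payoff numbers: three sub-budgets (X, Y, Z), one act optimal for each, and a fourth act B that is just-barely-suboptimal on every axis. If resources only stretch to a single act, B can still win on total payoff:

```python
# All payoff numbers below are made up purely for illustration.
acts = {
    "Ax": {"X": 10, "Y": 0, "Z": 0},   # optimal for budget X only
    "Ay": {"X": 0, "Y": 10, "Z": 0},   # optimal for budget Y only
    "Az": {"X": 0, "Y": 0, "Z": 10},   # optimal for budget Z only
    "B":  {"X": 9, "Y": 9, "Z": 9},    # just-barely-suboptimal everywhere
}

# Optimizing each sub-budget separately picks the three specialist acts...
per_budget_winners = {
    budget: max(acts, key=lambda a: acts[a][budget])
    for budget in ("X", "Y", "Z")
}

# ...but if you can only afford one act, B beats any specialist in total.
totals = {name: sum(payoffs.values()) for name, payoffs in acts.items()}
best_single_act = max(totals, key=totals.get)

print(per_budget_winners)  # each budget prefers its own specialist act
print(best_single_act)     # B: total 27 beats any specialist's 10
```

Of course, whether this actually happens depends on whether the budgets can be pooled at all; if each budget must be spent within its own category, the specialists win.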
Generally, you are right. But in effective altruism, the “helping other people” axis is estimated to do a hundred times more good if you use a separate budget for it.
This may be suboptimal for the other axes, though. Taking the pledge and having your name on the list could help along the “signalling philanthropy” axis.
Fair point.
Expanding on this, isn’t there an aspect of purchasing fuzzies in the usual form of effective altruism? I know there’s been a lot of talk of vegetarianism and animal-welfare on LW, but there’s something in it that’s related to this issue.
At least some people believe it’s been pretty conclusively shown that mammals and some avians have subjective experience and the ability to suffer, in the same way humans do. In this respect humans, mammals, and those avian species are equal—they have roughly the same capacity to suffer. Also, with over 50 billion animals used to produce food and other commodities every year, one could argue that the scope of suffering in this sphere is greater than that among humans.
So let’s assume that the animals used as livestock have an ability to suffer equal to that of humans. Let’s assume that the scope of suffering is greater in the livestock industry than among humans. Let’s also assume that we can reduce this suffering more easily than the suffering of humans. I don’t think it’s a stretch to say that these three assumptions could actually be true, and this post analyzed these factors in more detail. From these assumptions, we should conclude not only that we should become vegetarians, as this post argues, but also that animal welfare should be our top priority. It is our moral imperative to allocate all the resources we dedicate to buying utilons to animal welfare, until its marginal utility drops below that of human welfare.
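That allocation rule can be sketched as a greedy loop: spend each unit of the utilon budget wherever marginal utility is currently highest, so animal welfare absorbs everything until diminishing returns push its marginal utility below human welfare’s. The utility curves below are invented solely for illustration, not real cost-effectiveness estimates:

```python
def marginal_utility(cause, units_already_spent):
    """Made-up diminishing-returns curves for two causes."""
    if cause == "animal_welfare":
        return 100 / (1 + units_already_spent)  # starts high, decays
    else:  # "human_welfare"
        return 20 / (1 + units_already_spent)

def allocate(budget_units):
    spent = {"animal_welfare": 0, "human_welfare": 0}
    for _ in range(budget_units):
        # Greedy step: fund whichever cause helps most at the margin.
        best = max(spent, key=lambda c: marginal_utility(c, spent[c]))
        spent[best] += 1
    return spent

allocation = allocate(10)
print(allocation)  # animal welfare dominates until its returns diminish
```

With these toy curves, nine of ten units go to animal welfare before its marginal utility falls low enough for human welfare to compete—which is the shape of the conclusion the assumptions above force.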
Again, just playing devil’s advocate. Are there reasons to help humans other than the fact that they belong to our tribe more than animals do? The counterarguments raised in this post by RobbBB are very relevant, especially points 3 and 4. Maybe animals don’t actually have the subjective experience of suffering, and what we think of as suffering is only damage-avoiding and damage-signaling behavior. Maybe sapience makes true suffering possible in humans, and that’s why animals can’t truly suffer on the same level as humans.
I had this horrible picture of a future where human-utilon-maximizing altruists distribute mosquito nets as the most cost-efficient tool to reduce human suffering, and animal-utilon-maximizing altruists sabotage the net production as the most cost-efficient tool to reduce mosquito suffering...
That’s a worthwhile concern, but I personally wouldn’t make the distinction between animal-utilons and human-utilons. I would just try to maximize utilons for conscious beings in general. Pigs, cows, chickens and other farm animals belong in that category; mosquitoes, other insects and jellyfish don’t. That’s also why I think eating insects is on par with vegetarianism: you’re not really hurting any conscious beings.
Since we’re playing the devil’s advocate here: much more important than geographical and cultural proximity to me would be how many values I share with these people I’m helping, were I ever to come in even remote contact with them or their offspring.
Would you effective altruist people donate mosquito nets to baby-eating aliens if it cost-effectively relieved their suffering? If not, where do you draw the line in value divergence? At humans?