The strength of the economic argument for giving only to your top charity is proportional to the difference between it and your next choice. If the difference is small enough and you find it painful to pick only one, it’s just not worth it: give to both.
According to Brian Tomasik’s estimates, a dollar donated to the most cost-effective animal charity is expected to prevent between 100 days and 51 years of suffering on a factory farm. Even if you think this charity is only 5% more effective than your next choice, donating to this charity would alleviate between 5 days and 2.55 years of suffering more than would donating to the second best charity. On a very modest donation of, say, $200 per year, the difference amounts to between ~3 and ~500 years of suffering. In light of these figures, it doesn’t seem that the fact that “you find it painful to pick only one” charity is, in itself, a good reason to pick both.
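A rough back-of-envelope sketch of that calculation, in Python, using Tomasik’s stated range together with the purely hypothetical 5% gap and $200/year donation assumed above:

```python
# Sketch of the comparison above. The 100-days-to-51-years range is
# Tomasik's estimate; the 5% effectiveness gap and the $200/year
# donation are the hypothetical figures used in this comment.
DAYS_PER_YEAR = 365.0

low_per_dollar = 100 / DAYS_PER_YEAR   # years of suffering prevented per dollar (low end)
high_per_dollar = 51.0                 # years of suffering prevented per dollar (high end)

effectiveness_gap = 0.05               # top charity assumed 5% more effective
donation = 200                         # dollars per year

low_diff = donation * low_per_dollar * effectiveness_gap    # ~2.7 years
high_diff = donation * high_per_dollar * effectiveness_gap  # ~510 years

print(f"Extra suffering averted by concentrating on the top charity: "
      f"~{low_diff:.1f} to ~{high_diff:.0f} years")
```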
If I’m giving $200/year there are lots of options I could take to improve my impact:
I could spend less on myself so I can give more.
I could earn more so I could give more.
I could put more time into choosing the most effective charity.
I could limit my donations to only my top charity, even when I think other charities are almost as good.
All of these are painful to myself but have benefits to others, so to maximize my positive impact I should prioritize them based on the ratio of self-pain to other-benefit. What I’m claiming here is that the last option has a poor ratio, for charities that are close enough together in impact.
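A minimal sketch of that prioritization rule, with entirely made-up pain and benefit numbers just to illustrate the ratio comparison:

```python
# Hypothetical sketch: rank ways of improving impact by other-benefit
# per unit of self-pain. All numbers are invented for illustration.
options = [
    # (option, self-pain, other-benefit)
    ("spend less on myself",        3.0, 40.0),
    ("earn more",                   5.0, 80.0),
    ("research charities more",     2.0, 30.0),
    ("give only to my top charity", 4.0,  1.0),  # tiny benefit when charities are nearly tied
]

for name, pain, benefit in sorted(options, key=lambda o: o[2] / o[1], reverse=True):
    print(f"{name:30s} benefit/pain = {benefit / pain:5.1f}")
```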
Not directly relevant, but is there a LW post or other resource of similar or greater caliber defending the idea that we should be assigning significant moral weight to non-human animals?
There is a very simple meta-argument: whatever your argument is for giving value to humans, it will also be strong enough to show that some non-humans are also valuable, due to a partial overlap between humans and non-humans in all the properties you might credibly regard as morally relevant.
In any case, I was using animal charities only because I’m more familiar with the relevant estimates of cost-effectiveness. On the plausible assumption that such charities are not many orders of magnitude more cost-effective than the most cost-effective human charity, the argument should work for human charities, too.
How about in-group affiliation with members of your own species?
Do you really believe that, when a creature suffers intensely, your reasons for relieving this creature’s suffering derive from the fact that you share a particular genotype with this creature? If you were later told that a being whom you thought belonged to your species actually belongs to a different species, or to no species at all (a sim), would you suddenly lose all reason to help her?
I don’t, but I don’t dismiss the possibility that other people may; I’ve certainly known people who asserted such.
My argument for valuing humans is that they are human.
Really? Is it because you find the sequence of genes that define the human genome aesthetically pleasing? If science uncovered that some of the folks we now think are humans actually belonged to a different species, would you start eating them? If you became persuaded that you are living in a simulation, would you feel it’s okay to kill these seemingly human sims?
Edit (2014-07-16): Upon re-reading this and some of my other comments in this thread, I realize my tone was unnecessarily combative and my interpretation of Qiaochu’s arguments somewhat uncharitable. I apologize for this.
I disagree with the general attitude that my moral values need to cover all possible edge cases to be applicable in practice. I don’t know about you, but in practice, I’m pretty good at distinguishing humans from nonhumans, and I observe that my brain seems to care substantially more about the suffering of the former than the latter.
And my brain’s classification of things into humans and nonhumans isn’t based on nucleotide sequences or anything that brains like mine wouldn’t have had access to in the ancestral environment. When I dereference the pointer “humans” I get “y’know, like your mother, your father, your friends, hobos, starving children in Africa...” It points to the same things whether or not I later learn that half of those people are actually Homo neanderthalensis or we all live in a simulation. If I learn that I’ve always lived in a simulation, then the things I refer to as humans have always been things living in a simulation, so those are the things I value.
So I think either this constitutes a counterexample to your meta-argument or we should both taboo “human.”
I don’t understand your reply. In your earlier message you said that you value humans just because they are members of a certain species. But in your most recent message you say that your “brain’s classification of things into humans and nonhumans isn’t based on nucleotide sequences or anything that brains like mine wouldn’t have had access to in the ancestral environment.” Yet in order to know whether a being is a member of the species Homo sapiens, and hence whether he or she is a being that you morally value, you need to know whether his or her DNA contains a particular nucleotide sequence.
Earlier you also ask whether there are LW posts defending the idea that we should be assigning significant moral weight to non-human animals. The reaction to my earlier comment might provide a partial explanation why there are no such posts. When one makes a serious attempt to raise a few questions intended to highlight the absurd implications of a position defended by many members of this community, but universally rejected by the community of moral philosophers, folks here react by downvoting the corresponding comment and abstaining from actually answering those questions.
Still, I appreciate your effort to at least try to clarify what your moral views are.
This strikes me as a “is XYZ water” thing.
Like, man, I sure do like drinking water! Does what I like about it have anything to do with its being H2O? Well, not really, so it wouldn’t be fair to say that the intension of “water” in my claim to like it is H2O.
What you like is the taste of water; if the liquid that you believe is water turns out to have a different molecular structure, you’d still like it as much. This example is illustrative, because it suggests that Qiaochu and others, contrary to what they claim, do not really care whether a creature belongs to a certain species, but only that it has certain characteristics that they associate with that species (sentience, intelligence, or what have you). But if this is what these people believe, (1) they should say so explicitly, and, more importantly, (2) they face the meta-argument I presented above.
I agree with Oligopsony that Qiaochu is not using “human” as a rigid designator. Furthermore, I don’t think it’s safe to assume that their concept of “human” is a simple conjunction or disjunction of simple features. Semantic categories tend to not work like that.
This is not to say that a moral theory can’t judge some features like sentience to be “morally relevant”. But Qiaochu’s moral theory might not, which would explain why your argument was not effective.
If Qiaochu is not using “human” as a rigid designator, then what he cares for is not beings with a certain genome, but beings having certain other properties, such as intelligence, sentience, or those constitutive of the intensions he is relying upon to pick out the object of his moral concern. This was, in fact, what I said in my previous comment. As far as I can see, the original “meta-argument” would then apply to his views, so understood.
(And if he is picking out the reference of ‘human’ in some other, more complex way, as you suggest, then I’d say he should just tell us what he really means, so that we can proceed to consider his actual position instead of speculating about what he might have meant.)
Indeed, they are almost certainly picking out the reference of ‘human’ in a more complex way. Their brain is capable of outputting judgments of ‘human’ or ‘not human’, as well as ‘kinda human’ and ‘maybe human’. The set of all things judged ‘human’ by this brain is an extensional definition for their concept of ‘human’. The prototype theory of semantic categories tells us that this extension is unlikely to correspond to an intelligible, simple intension.
he should just tell us what he really means
Well, they could say that the property they care about is “beings which are judged by Qiaochu’s brain to be human”. (Here we need ‘Qiaochu’s brain’ to be a rigid designator.) But the information content of this formula is huge.
You could demand that your interlocutor approximate their concept of ‘human’ with an intelligible intensional definition. But they have explicitly denied that they are obligated to do this.
So Qiaochu is not using ‘human’ in the standard, scientific definition of that term; is implying that his moral views do not face the argument from marginal cases; is not clearly saying what he means by ‘human’; and is denying that he is under an obligation to provide an explicit definition. Is there any way one could have a profitable argument with such a person?
I guess so; I guess so; I guess so; and I guess so.
You are trying through argument to cause a person to care about something they do not currently care about. This seems difficult in general.
It was Qiaochu who initially asked for arguments for caring about non-human animals.
I don’t know of a good argument for that position, but there’s good evidence that some of the universal emotions discussed in CFAR’s “Emotional API” unit (namely SEEKING, RAGE, FEAR, LUST, CARE, PANIC/GRIEF and PLAY) are experienced by nonhuman mammals. That fact might cause one to care more about animals.
Good question: how would one do this consistently? If you value agency/intelligence, you have to develop a metric which does not lead to stupid results, like having your utility function overwhelmed by insects and bacteria due to their sheer numbers. Of course, one can always go by cuteness.
Is that necessarily stupid? Obviously it is if you only value agency/intelligence, and it’s an empirical question whether insects and bacteria have the other characteristics you may care about, but given that it is an empirical question the only acceptable response seems to be to shut up and calculate.
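A toy calculation of the worry being discussed: unless per-individual weights fall off faster than population sizes grow, the smallest creatures dominate the total. The counts are rough orders of magnitude and the weights are entirely made up:

```python
# Toy illustration only; counts are rough orders of magnitude and the
# per-individual moral weights are invented for the sake of the example.
populations = {
    #            (count, assumed weight per individual)
    "humans":    (8e9,   1.0),
    "chickens":  (2e10,  1e-2),
    "insects":   (1e19,  1e-6),
    "bacteria":  (1e30,  1e-12),
}

for name, (count, weight) in populations.items():
    print(f"{name:9s} total weighted value ~ {count * weight:.1e}")
# With these weights, insects (~1e13) and bacteria (~1e18) swamp
# humans (~8e9): the "overwhelmed by sheer numbers" outcome.
```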
Also not directly relevant, but is there any argument opposed to prioritizing animals of higher intelligence and “capacity for suffering”, such as primates and cetaceans?
Personally, while I assign negative utility to animals suffering in factory farms, I adjust for the mental capacity of the animals in question (in broad terms, “how much do I care about this animal’s suffering relative to a human’s?”), and in many cases this is the controlling factor of the calculation. If I were deciding between charities which prevented human suffering on that order, clearly the difference between top charities would outweigh the magnitude of my suffering, but when the animals in question are mostly chickens, it’s not clear to me that this is still the case. I discount the suffering of a creature capable of this tremendously relative to that of humans.
I wasn’t arguing that you should donate to non-human animal charities. I was arguing that if you do donate to non-human animal charities, you should donate solely to the most cost-effective such charity, even if you would get more fuzzies by splitting your donation between two or more charities. I was also implicitly suggesting that if you believe that non-human animal charities and human charities are comparably cost-effective, the argument generalizes to human charities, too. Discounting the suffering of non-human animals only serves to strengthen my argument, since it decreases the cost-effectiveness of non-human animal charities relative to that of human charities.
Discounting the suffering of non-human animals only serves to strengthen my argument, since it decreases the cost-effectiveness of non-human animal charities relative to that of human charities.
In that case, I’m not sure what your original argument was.
The argument was explained in the sentences immediately preceding the one you quoted.
The painfulness of the decision is also a form of disutility that has to be balanced against the difference between the charities, though, which was the point of my original comment. If the difference between the values of the donations, when adjusted for the species involved, is smaller than the utility you personally lose from agonizing over how to apportion your donation, splitting it may result in higher utility overall.
Obviously, this is heavily dependent on how large the utility differences between the top charities are; if it weren’t, my comment about discounting the suffering of less intelligent species wouldn’t have been relevant.
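A minimal sketch of the trade-off described in the two preceding paragraphs, with placeholder numbers standing in for quantities you would estimate yourself:

```python
# Placeholder numbers only; in practice both quantities come from your
# own cost-effectiveness estimates and introspection.
adjusted_difference = 0.5  # utility gained by giving only to the top charity,
                           # after discounting for the species involved
agonizing_cost = 2.0       # personal disutility of forcing yourself to pick just one

if agonizing_cost > adjusted_difference:
    print("Splitting the donation yields higher total utility.")
else:
    print("Concentrating on the top charity yields higher total utility.")
```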