There is a very simple meta-argument: whatever your argument is for giving value to humans, it will also be strong enough to show that some non-humans are also valuable, due to a partial overlap between humans and non-humans in all the properties you might credibly regard as morally relevant.
In any case, I was using animal charities only because I’m more familiar with the relevant estimates of cost-effectiveness. On the plausible assumption that such charities are not many orders of magnitude more cost-effective than the most cost-effective human charity, the argument should work for human charities, too.
How about in-group affiliation with members of your own species?
Do you really believe that, when a creature suffers intensely, your reasons for relieving this creature’s suffering derive from the fact that you share a particular genotype with this creature? If you were later told that a being whom you thought belonged to your species actually belongs to a different species, or to no species at all (a sim), would you suddenly lose all reason to help her?
I don’t, but I don’t dismiss the possibility that other people may; I’ve certainly known people who have asserted as much.
My argument for valuing humans is that they are human.
Really? Is it because you find the sequence of genes that defines the human genome aesthetically pleasing? If science uncovered that some of the folks we now think are humans actually belonged to a different species, would you start eating them? If you became persuaded that you are living in a simulation, would you feel it’s okay to kill these seemingly human sims?
Edit (2014-07-16): Upon re-reading this and some of my other comments in this thread, I realize my tone was unnecessarily combative and my interpretation of Qiaochu’s arguments somewhat uncharitable. I apologize for this.
I disagree with the general attitude that my moral values need to cover all possible edge cases to be applicable in practice. I don’t know about you, but in practice, I’m pretty good at distinguishing humans from nonhumans, and I observe that my brain seems to care substantially more about the suffering of the former than the latter.
And my brain’s classification of things into humans and nonhumans isn’t based on nucleotide sequences or anything that brains like mine wouldn’t have had access to in the ancestral environment. When I dereference the pointer “humans” I get “y’know, like your mother, your father, your friends, hobos, starving children in Africa...” It points to the same things whether or not I later learn that half of those people are actually Homo neanderthalensis or we all live in a simulation. If I learn that I’ve always lived in a simulation, then the things I refer to as humans have always been things living in a simulation, so those are the things I value.
So I think either this constitutes a counterexample to your meta-argument or we should both taboo “human.”
I don’t understand your reply. In your earlier message you said that you value humans just because they are members of a certain species. But in your most recent message you say that your “brain’s classification of things into humans and nonhumans isn’t based on nucleotide sequences or anything that brains like mine wouldn’t have had access to in the ancestral environment.” Yet in order to know whether a being is a member of the species Homo sapiens, and hence whether he or she is a being that you morally value, you need to know whether his or her DNA contains a particular nucleotide sequence.
Earlier you also ask whether there are LW posts defending the idea that we should be assigning significant moral weight to non-human animals. The reaction to my earlier comment might provide a partial explanation of why there are no such posts. When one makes a serious attempt to raise a few questions intended to highlight the absurd implications of a position defended by many members of this community, but universally rejected by the community of moral philosophers, folks here react by downvoting the corresponding comment and abstaining from actually answering those questions.
Still, I appreciate your effort to at least try to clarify what your moral views are.
This strikes me as an “is XYZ water” thing.
Like, man, I sure do like drinking water! Does what I like about it have anything to do with its being H2O? Well, not really, so it wouldn’t be fair to say that the intension of “water” in my claim to like it is H2O.
What you like is the taste of water; if the liquid that you believe is water turns out to have a different molecular structure, you’d still like it as much. This example is illustrative because it suggests that Qiaochu and others, contrary to what they claim, do not really care whether a creature belongs to a certain species, but only that it has certain characteristics that they associate with that species (sentience, intelligence, or what have you). But if this is what these people believe, (1) they should say so explicitly, and, more importantly, (2) they face the meta-argument I presented above.
I agree with Oligopsony that Qiaochu is not using “human” as a rigid designator. Furthermore, I don’t think it’s safe to assume that their concept of “human” is a simple conjunction or disjunction of simple features. Semantic categories tend to not work like that.
This is not to say that a moral theory can’t judge some features like sentience to be “morally relevant”. But Qiaochu’s moral theory might not, which would explain why your argument was not effective.
If Qiaochu is not using “human” as a rigid designator, then what he cares for is not beings with a certain genome, but beings having certain other properties, such as intelligence, sentience, or those constitutive of the intensions he is relying upon to pick out the object of his moral concern. This was, in fact, what I said in my previous comment. As far as I can see, the original “meta-argument” would then apply to his views, so understood.
(And if he is picking out the reference of ‘human’ in some other, more complex way, as you suggest, then I’d say he should just tell us what he really means, so that we can proceed to consider his actual position instead of speculating about what he might have meant.)
Indeed, they are almost certainly picking out the reference of ‘human’ in a more complex way. Their brain is capable of outputting judgments of ‘human’ or ‘not human’, as well as ‘kinda human’ and ‘maybe human’. The set of all things judged ‘human’ by this brain is an extensional definition for their concept of ‘human’. The prototype theory of semantic categories tells us that this extension is unlikely to correspond to an intelligible, simple intension.
he should just tell us what he really means
Well, they could say that the property they care about is “beings which are judged by Qiaochu’s brain to be human”. (Here we need ‘Qiaochu’s brain’ to be a rigid designator.) But the information content of this formula is huge.
You could demand that your interlocutor approximate their concept of ‘human’ with an intelligible intensional definition. But they have explicitly denied that they are obligated to do this. (The toy sketch just below illustrates the extensional/intensional contrast.)
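As a rough programming analogy of that contrast (a minimal sketch only; the names and example features below are invented for illustration and are not anything the commenters wrote), an extensional definition is just a lookup against whatever judgments a particular judge happens to make, while an intensional definition is a short, explicit feature test, and nothing guarantees the two coincide:

```python
# Toy sketch only: invented names and example features, used purely as an
# analogy for the extensional/intensional distinction discussed above.

def brain_judgment(entity: str) -> bool:
    """Extensional stand-in: membership is whatever this opaque 'judge' has
    already labelled human; the full table of its outputs is the extension."""
    judged_human = {"your mother", "your father", "a friend", "a hobo"}
    return entity in judged_human

def simple_intension(features: set) -> bool:
    """A candidate intensional definition: a short, explicit feature test."""
    return {"sentient", "language-using"}.issubset(features)

# The two need not agree: the point in the thread is that no short predicate
# like simple_intension is guaranteed to reproduce brain_judgment's extension.
print(brain_judgment("your mother"))                     # True
print(simple_intension({"sentient", "language-using"}))  # True
```

On this picture, demanding that someone say what they “really mean” by ‘human’ amounts to asking them to compress something like brain_judgment into something like simple_intension, which is precisely the obligation being denied above.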
So Qiaochu is not using ‘human’ in the standard, scientific sense of that term; is implying that his moral views do not face the argument from marginal cases; is not clearly saying what he means by ‘human’; and is denying that he is under an obligation to provide an explicit definition. Is there any way one could have a profitable argument with such a person?
I guess so; I guess so; I guess so; and I guess so.
You are trying through argument to cause a person to care about something they do not currently care about. This seems difficult in general.
It was Qiaochu who initially asked for arguments for caring about non-human animals.