What you like is the taste of water; if the liquid that you believe is water turns out to have a different molecular structure, you’d still like it just as much. This example is illustrative, because it suggests that Qiaochu and others, contrary to what they claim, do not really care whether a creature belongs to a certain species, but only that it has certain characteristics that they associate with that species (sentience, intelligence, or what have you). But if this is what these people believe, (1) they should say so explicitly, and, more importantly, (2) they face the meta-argument I presented above.
I agree with Oligopsony that Qiaochu is not using “human” as a rigid designator. Furthermore, I don’t think it’s safe to assume that their concept of “human” is a simple conjunction or disjunction of simple features. Semantic categories tend not to work like that.
This is not to say that a moral theory can’t judge some features like sentience to be “morally relevant”. But Qiaochu’s moral theory might not, which would explain why your argument was not effective.
If Qiaochu is not using “human” as a rigid designator, then what he cares about is not beings with a certain genome, but beings having certain other properties, such as intelligence, sentience, or whichever properties are constitutive of the intensions he is relying upon to pick out the object of his moral concern. This was, in fact, what I said in my previous comment. As far as I can see, the original “meta-argument” would then apply to his views, so understood.
(And if he is picking out the reference of ‘human’ in some other, more complex way, as you suggest, then I’d say he should just tell us what he really means, so that we can proceed to consider his actual position instead of speculating about what he might have meant.)
Indeed, they are almost certainly picking out the reference of ‘human’ in a more complex way. Their brain is capable of outputting judgments of ‘human’ or ‘not human’, as well as ‘kinda human’ and ‘maybe human’. The set of all things judged ‘human’ by this brain is an extensional definition for their concept of ‘human’. The prototype theory of semantic categories tells us that this extension is unlikely to correspond to an intelligible, simple intension.
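To make the extensional/intensional contrast concrete, here is a toy sketch (all examples and property names are hypothetical, chosen purely for illustration): an extensional definition simply enumerates everything the brain judges to fall under the concept, while an intensional definition supplies a membership rule, and the two need not agree.

```python
# Toy illustration of extensional vs. intensional definitions.
# The entries and properties below are hypothetical; a real
# human-concept extension would be vastly larger and, per prototype
# theory, unlikely to match any simple rule.

# Extensional definition: the bare set of things judged 'human'.
extension = {"Socrates", "a newborn", "a person in a coma"}

def is_human_extensional(x):
    """Membership by enumeration: x is 'human' iff the brain judged it so."""
    return x in extension

def is_human_intensional(properties):
    """Membership by a (deliberately too simple) conjunctive rule."""
    return properties.get("sentient", False) and properties.get("sapient", False)

# The two definitions can diverge: a newborn is in the extension,
# but fails the naive conjunctive intension.
print(is_human_extensional("a newborn"))                    # True
print(is_human_intensional({"sentient": True}))             # False
```

The divergence in the last two lines is the point of the prototype-theory claim above: the judged extension is easy to record but hard to compress into an intelligible intension.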
he should just tell us what he really means
Well, they could say that the property they care about is “beings which are judged by Qiaochu’s brain to be human”. (Here we need ‘Qiaochu’s brain’ to be a rigid designator.) But the information content of this formula is huge.
You could demand that your interlocutor approximate their concept of ‘human’ with an intelligible intensional definition. But they have explicitly denied that they are obligated to do this.
So Qiaochu is not using ‘human’ according to the standard, scientific definition of that term; is implying that his moral views do not face the argument from marginal cases; is not clearly saying what he means by ‘human’; and is denying that he is under an obligation to provide an explicit definition. Is there any way one could have a profitable argument with such a person?
I guess so; I guess so; I guess so; and I guess so.
You are trying through argument to cause a person to care about something they do not currently care about. This seems difficult in general.
It was Qiaochu who initially asked for arguments for caring about non-human animals.