Thanks for fleshing out your view more! It’s likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you’ve done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it’s likely that no simple well-defined list of traits would provide a crisp criterion for what ‘friendship’ or ‘potential friendship’ means to you. It’s just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all.
The reliance on especially poorly-defined and essentializing categories bothers me, but I’ll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It’s not all-or-nothing.
Allowing that it’s not all-or-nothing lets us escape most of your view’s problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn’t be a ‘friend’ or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn’t be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between biological kinds. And once we admit such a scale, it becomes much harder to believe that the scale ends completely at a level above the most intelligent and suffering-prone non-human, as opposed to slowly trailing off into the phylogenetic distance.
On your view, the reason — at a deep level, the only reason — that it’s the least bit wrong to torture infants or invalids arbitrarily intensely for arbitrarily long periods of time, is that when we think of infants and invalids, we imagine them as (near) potential or (near) would-be ‘friends’ of ours. That is, I see a baby injured and in pain and I feel morally relevant sympathy (as opposed to, say, the confused, morally delusive sympathy some people feel toward dogs or dolphins) only because I imagine that the baby might someday grow up and, say, sell staplers to me, share secrets with me, or get me a job as a server at Olive Garden. Even if the child has some congenital disease that makes its premature death a physical certainty, still the shadow of its capacities to socialize with me, cast from nearby possible worlds (or just from the ease with which we could imagine the baby changing to carry on business transactions with me), confers moral significance upon its plight.
I just don’t think that’s so. Our sympathy toward infants doesn’t depend on a folk theory of human development. We’d feel the same sympathy, or at least something relevantly similar, even if we’d been raised in a bubble and never been told that infants are members of the human species at all prior to encountering one. If you knew all the physiological and psychological facts about infants except ones that showed they tend to develop into intelligent socializers, you’d already have plenty of good reason not to torture infants. Learning that infants develop into adult humans might add new (perhaps even better) reasons not to torture them, but it wouldn’t constitute the only such reasons. But in that case, the sympathy we feel for animals that have infant-like neural and psychological traits but never develop into active participants in complex language-using societies would be morally relevant for the same reasons.
We’ve been speaking as though my view of morality were the simple one, yours relatively complex and rich. But in fact I think your reluctance to say ‘it’s bad when chickens suffer’ results from an attempt to oversimplify and streamline your own ethical reasoning, privileging one set of moral intuitions without providing a positive argument for the incoherence or illegitimacy of the others. I think it’s entirely possible that some of our moral intuitions are specific to beings like adult humans (e.g., intuitions of reciprocity, fairness, and authority), while others generalize to varying degrees to non-humans (e.g., intuitions of suffering and gain). Because different moral thoughts and emotions can be triggered by radically different stimuli, we shouldn’t be surprised that some of those stimuli require an assumption of intelligence (hence, if applied to chickens, plausibly depend on excessive anthropomorphization), while others require very different assumptions. In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
Maybe, evolutionarily, we feel sympathy for non-humans only as a side-effect of traits selected because they help us cooperate with other humans. But that then is just a historical fact; it has no bearing on the legitimacy, or the reality, of our sympathy for suffering non-humans. What’s really needed to justify carnivory is a proof that this sympathy presupposes an ascription of properties to non-humans that they in fact lack, e.g., sentience. Our intuitions about the moral relevance of chickens would then go the way of moral intuitions about ghosts and gods. But I haven’t seen a case like this made yet; the claim has not been that animals don’t suffer (or that their ‘suffering’ is so radically unlike human suffering that it loses its moral valence). Rather, the claim has been that if brutes do suffer, that doesn’t matter (except insofar as it also causes human suffering).
We both think moral value is extremely complicated; but I think it’s relatively disjunctive, whereas you think it’s relatively conjunctive. I’d be interested to hear arguments for why we should consider it conjunctive, but I still think it’s important that to the extent we’re likely to be in error at all, we err on the side of privileging disjunctivity.
In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
This is a good point. I’ll have to think about this.
This is quite a good post, thanks for taking the time to write it. You’ve said before that you think vegetarianism is the morally superior option. While you’ve done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter?
I ask in part because I don’t think that erring on the side of disjunctivity, as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships), is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We’ve historically had both problems, and I don’t know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the Holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say).
EDIT: I should add, and not at all by way of criticism, that for all the pejoratives aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not merely that this thesis is coherent or possible, but that it’s true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being.
Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed.
It’s hard to talk about this in the abstract, so maybe you should say more about what you’re worried about, and (ideally) about some alternative that avoids the problem. It sounds like you’re suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
the import of suffering is not completely dependent on the import of socializing,
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I’d be happy to be pointed in the right direction).
Then we may end up saying that some groups of humans deserve more rights than others, in a non-meritocratic way. Is that your worry?
My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu’s motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but it could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.
So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full-fledged people.
The reason this might be worrying is that our understanding of what it is to be human, or of what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc., and thereby denying to many people who fall short in these ways the moral significance we generally think they’re owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not say they were morally insignificant, mind, just that given their capacities slavery was the best they could do.
I’m not saying you’re committed to any kind of moral oligarchy, only that because this kind of disjunctive strategy eschews the direct and solitary link between humanity and moral value, it cannot be called the safer option without further ado. A society in error could do as much damage proceeding by your disjunctive rule (by messing up the distributions) as they could proceeding with a conjunctive rule (by messing up who counts as ‘human’).
An alternative might be to say that there is moral value proper, which every human being (or relevantly human-like thing) has, and then there are a variety of defective or subordinate forms of moral significance depending on how something is related to that moral core. This way, you’d keep the hard moral floor, but you’d be able to argue for the (non-intrinsic) moral value of non-humans. (Unfortunately, this alternative also deploys an Aristotelian idea: core-dependent homonymy.)