I’m not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren’t wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one’s affective state.) If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
I don’t think this is specifically relevant. I upvoted your ‘blue robot’ comment because this is an important issue to worry about, but ‘that’s a black box’ can’t be used as a universal bludgeon. (Particularly given that it defeats appeals to ‘isHuman’ even more thoroughly than it defeats appeals to ‘isSuffering’.)
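(A trivial sketch, purely to make the ‘black box’ point concrete — the identifiers here are hypothetical labels chosen by the programmer, and nothing about the program’s behavior depends on what they connote:)

```python
# The names 'isSuffering' and 'isHuman' are labels for human readers;
# to the interpreter they are interchangeable with any other identifiers.
isSuffering = True
isHuman = True

# Renaming them changes nothing about what the program actually does:
flag_a = isSuffering
flag_b = isHuman
assert flag_a is True and flag_b is True
```

The same renaming argument that defeats an appeal to `isSuffering` defeats an appeal to `isHuman` just as thoroughly, which is why the ‘black box’ objection can’t be used as a one-sided bludgeon.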
Cool. Then we’re in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead).
I assume you’re being tongue-in-cheek here, but be careful not to mislead spectators. ‘Human life isn’t perfect, ergo we are under no moral obligation to eschew torturing non-humans’ obviously isn’t sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans’ welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).
I assume you’re being tongue-in-cheek here
Nope.
White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
I don’t think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can’t do. I don’t see a relevant disanalogy. (Other than the question-begging one ‘fish aren’t human’.)
I guess that should’ve ended “...that fish can’t do and that are important parts of how they interact with other white people.” Black people are capable of participating in human society in a way that fish aren’t.
A “reversed stupidity is not intelligence” warning also seems appropriate here: I don’t think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
I don’t think we should stop making distinctions altogether either; I’m just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take ‘the expanding circle’ as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that’s improved far beyond contemporary society’s hodgepodge of standards.
I think the main lesson from ‘expanding circle’ events is that we should be relatively cautious about assuming that something isn’t a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. ‘Black people don’t have moral standing because they’re less intelligent than us’ fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other.)
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it’s a bit of an explanatory IOU until we know exactly what the neural basis of ‘consciousness’ is, but ‘intelligent’ and ‘able to participate in human society’ are IOUs in the same sense.) Likewise for gods and dead bodies—the former don’t exist, and the latter again fail very general criteria like ‘is it conscious?’ and ‘can it suffer?’ and ‘can it desire?’. These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap.
Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological ‘humanity’ and ‘inhumanity’ are significant, and that makes it dangerous to adopt a policy of ‘assume everything with a weird appearance or behavior has no moral rights until we’ve conclusively proved that its difference from us is only skin-deep’.
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum.
What about unconscious people?
Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it.
I don’t know why you got a down-vote; these are good questions.
What about unconscious people?
I’m not sure there are unconscious people. By ‘unconscious’ I meant ‘not having any experiences’. There’s also another sense of ‘unconscious’ in which people are obviously sometimes unconscious — whether they’re awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for ‘bare consciousness’, but it’s not necessary, since people can experience dreams while ‘unconscious’.
Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly ‘switches off’ — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like ‘Do we have a responsibility to make conscious beings come into existence?’ and ‘Do we have a responsibility to fulfill people’s wishes after they die?’. I’d lean toward ‘yes’ on the former, ‘no but it’s generally useful to act as though we do’ on the latter.
So what’s your position on abortion?
Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It’s conceivable that there’s no true consciousness at all until after birth — analogously, it’s possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.
Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren’t so capable.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).
Plus many fish can participate in their own societies.
I’m skeptical of the claim that any fish have societies in a meaningful sense. Citation?
If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them?
If they’re intelligent enough we can still trade with them, and that’s fine.
Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other
I don’t think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap. Possibly they fall into a new and different trap, though?
Yes: not capturing complexity of value. Again, morality doesn’t behave like science. Looking for general laws is not obviously a good methodology, and in fact I’m pretty sure it’s a bad methodology.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.
In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence—a more detailed map can be wrong about the territory in more ways.
Again, morality doesn’t behave like science.
Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. “Looking for general laws” is a good idea here for the same reason it’s a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we’re not complicating our theory in arbitrary or unnecessary ways.
Knowing at the outset that storms are complex doesn’t mean that we shouldn’t try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories.
If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). “Too simple” is a valid objection if the premise “Not simple” is implied.
That’s assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that’s the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we’re talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to try to look more like you think the Final Theory will someday appear, you won’t learn as much from the areas where your map fails. ‘Value is complex’ is compatible with the utility of starting with simple models, particularly since we don’t yet know in what respects it is complex.
To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong
Obviously that’s not what I’m suggesting. What I’m suggesting is that it’s both more complicated and that this complication is justified from my perspective because it captures my moral intuitions better.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering.
Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you’re much bigger than an atom and much slower than light).
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society.
Isn’t a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don’t know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy—and are likely to become far fuzzier as we take more control of our genetic future. We also know that what’s normal for a certain species can vary wildly over historical time. ‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.
It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or ‘feels’?) distant, yet completely intolerable in contexts where this external technology is more ‘near’ on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?
I don’t find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering.
Actually, now that you bring it up, I’m surprised by how similar the two are. ‘Heuristics’ by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.
in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases
I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that’s different from claiming that it’s an advantage of a moral claim that it gets the right answer less often.
I’m skeptical of the claim that any fish have societies in a meaningful sense.
I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?
If they’re intelligent enough we can still trade with them, and that’s fine.
If we can’t trade with them for some reason, it’s still not OK to torture them.
The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
‘The psychological unity of mankind’ is question-begging here. It’s just a catchphrase; it’s not as though there’s some scientific law that all and only biologically human minds form a natural kind. If we’re having a battle of catchphrases, vegetarians can simply appeal to the ‘psychological unity of sentient beings’.
Sure, they’re less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I’m looking for is a reason to favor the one unity over an infinite number of rival unities.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it’s not enough to elevate it to a large probability.
‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans
I don’t think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.)
It seems damningly arbitrary to me.
You’re still using a methodology that I think is suspect here. I don’t think there are good reasons to expect “everything that feels pain has moral value, period” to be a better moral heuristic than “some complicated set of conditions singles out the things that have moral value” if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2.
My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer
Your intuition, not mine.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena
System 1 doesn’t know what a biological human is. I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.” Posthumans and sufficiently intelligent AI could also fall in this category, but I’m still pretty sure that fish don’t. I actually only care about the second principle.
that other models can handle with only a single generalization.
While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
Thanks for fleshing out your view more! It’s likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you’ve done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it’s likely that no simple well-defined list of traits would provide a crisp criterion for what ‘friendship’ or ‘potential friendship’ means to you. It’s just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all.
The reliance on especially poorly-defined and essentializing categories bothers me, but I’ll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It’s not all-or-nothing.
Allowing that it’s not all-or-nothing lets us escape most of your view’s problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn’t be a ‘friend’ or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn’t be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between biological kinds. And once we admit such a scale, it becomes much harder to believe that the scale ends completely at a level above the most intelligent and suffering-prone non-human, as opposed to slowly trailing off into the phylogenetic distance.
On your view, the reason — at a deep level, the only reason — that it’s the least bit wrong to torture infants or invalids arbitrarily intensely for arbitrarily long periods of time, is that when we think of infants and invalids, we imagine them as (near) potential or (near) would-be ‘friends’ of ours. That is, I see a baby injured and in pain and I feel morally relevant sympathy (as opposed to, say, the confused, morally delusive sympathy some people feel toward dogs or dolphins) only because I imagine that the baby might someday grow up and, say, sell staplers to me, share secrets with me, or get me a job as a server at Olive Garden. Even if the child has some congenital disease that makes its premature death a physical certainty, still the shadow of its capacities to socialize with me, cast from nearby possible worlds (or just from the ease with which we could imagine the baby changing to carry on business transactions with me), confers moral significance upon its plight.
I just don’t think that’s so. Our sympathy toward infants doesn’t depend on a folk theory of human development. We’d feel the same sympathy, or at least something relevantly similar, even if we’d been raised in a bubble and never been told that infants are members of the human species at all prior to encountering one. If you knew all the physiological and psychological facts about infants except ones that showed they tend to develop into intelligent socializers, you’d already have plenty of good reason not to torture infants. Learning that infants develop into adult humans might add new (perhaps even better) reasons to not torture them, but it wouldn’t constitute the only such reasons. But in that case the sympathy we feel for animals with infant-like neural and psychological traits, but that never develop into active participants in complex language-using societies, would be morally relevant for the same reasons.
We’ve been speaking as though my view of morality were the simple one, yours relatively complex and rich. But in fact I think your reluctance to say ‘it’s bad when chickens suffer’ results from an attempt to oversimplify and streamline your own ethical reasoning, privileging one set of moral intuitions without providing a positive argument for the incoherence or illegitimacy of the others. I think it’s entirely possible that some of our moral intuitions are specific to beings like adult humans (e.g., intuitions of reciprocity, fairness, and authority), while others generalize to varying degrees to non-humans (e.g., intuitions of suffering and gain). Because different moral thoughts and emotions can be triggered by radically different stimuli, we shouldn’t be surprised that some of those stimuli require an assumption of intelligence (hence, if applied to chickens, plausibly depend on excessive anthropomorphization), while others require very different assumptions. In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
Maybe, evolutionarily, we feel sympathy for non-humans only as a side-effect of traits selected because they help us cooperate with other humans. But that then is just a historical fact; it has no bearing on the legitimacy, or the reality, of our sympathy for suffering non-humans. What’s really needed to justify carnivory is a proof that this sympathy presupposes an ascription of properties to non-humans that they in fact lack, e.g., sentience. Our intuitions about the moral relevance of chickens would then go the way of moral intuitions about ghosts and gods. But I haven’t seen a case like this made yet; the claim has not been that animals don’t suffer (or that their ‘suffering’ is so radically unlike human suffering that it loses its moral valence). Rather, the claim has been that if brutes do suffer, that doesn’t matter (except insofar as it also causes human suffering).
We both think moral value is extremely complicated; but I think it’s relatively disjunctive, whereas you think it’s relatively conjunctive. I’d be interested to hear arguments for why we should consider it conjunctive, but I still think it’s important that to the extent we’re likely to be in error at all, we err on the side of privileging disjunctivity.
In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
This is a good point. I’ll have to think about this.
This is quite a good post, thanks for taking the time to write it. You’ve said before that you think vegetarianism is the morally superior option. While you’ve done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter?
I ask in part because I don’t think the claim that we ought to err on the side of disjunctivity as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships) is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We’ve historically had both problems, and I don’t know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the Holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say).
EDIT: I should add, and not at all by way of criticism, that for all the pejorative aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it’s true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being.
Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed.
It’s hard to talk about this in the abstract, so maybe you should say more about what you’re worried about, and (ideally) about some alternative that avoids the problem. It sounds like you’re suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
the import of suffering is not completely dependent on the import of socializing,
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I’d be happy to be pointed in the right direction).
Then we may end up saying that some groups of humans deserve more rights than others, in a non-meritocratic way. Is that your worry?
My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu’s motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but it could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.
So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full-fledged people.
The reason this might be worrying is that our understanding of what it is to be human, or of what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc., and thereby denying to many people who fall short in these ways the moral significance we generally think they’re owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not say they were morally insignificant, mind, just that given their capacities slavery was the best they could do.
I’m not saying you’re committed to any kind of moral oligarchy, only that because this kind of disjunctive strategy eschews the direct and solitary link between humanity and moral value, it cannot be called the safer option without further ado. A society in error could do as much damage proceeding by your disjunctive rule (by messing up the distributions) as they could proceeding with a conjunctive rule (by messing up who counts as ‘human’).
An alternative might be to say that there is moral value proper, that every human being (or relevantly human-like thing) has, and then there are a variety of defective or subordinate forms of moral significance depending on how something is related to that moral core. This way, you’d keep the hard moral floor, but you’d be able to argue for the (non-intrinsic) moral value of non-humans. (Unfortunately, this alternative also deploys an Aristotelian idea: core-dependent homonymy.)
I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.”
The term you are looking for here is ‘person’. The debate you are currently having is about what creatures are persons.
The following definitions aid clarity in this discussion:
Animal—a particular form of life that has evolved on Earth; most animals are mobile, multicellular, and respond to their environment (but this is not universally necessary or sufficient).
Human—a member of the species Homo sapiens, a particular type of hairless ape.
Person—a being that has recognized agency and (in many moral systems) specific rights.
Note that separating ‘person’ from ‘human’ allows you to recognize that not all humans are necessarily persons in all moral systems (e.g., apartheid regimes and ethnic-cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).
I don’t think most fish have complicated enough minds for this to be true.
Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain’s functioning.
Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
You are deferring to evidence; I just haven’t given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven’t bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you’re some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn’t be asking me for arguments at all. However, because we’re primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective—we’re experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)
But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish.
I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please.
this is why I think the analogy to science is inappropriate.
Fair enough! I don’t have a settled view on how much moral evidence should be introspective v. intersubjective, as long as we agree that it’s broadly empirical.
With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold which human socialization currently demonstrates as the important threshold, or the threshold which human socialization demonstrates at any given moment?
For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?
I would be tempted to ascribe moral value to the prosthetic, not the fish.
Thinking about this… while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral value to human societies, rather than to individual humans.
It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they’ve been brought up in the relevant way, they’re no less capable of social and sapient behavior.
On the other hand, the fish-prosthetic is part of what constitutes the fish’s capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities.
I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of individual human) in light of this distinction.
Hm. Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest. And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there’s some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there’s an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect.
So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here.
I’m not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren’t wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one’s affective state.) If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
I don’t think this is specifically relevant. I upvoted your ‘blue robot’ comment because this is an important issue to worry about, but ‘that’s a black box’ can’t be used as a universal bludgeon. (Particularly given that it defeats appeals to ‘isHuman’ even more thoroughly than it defeats appeals to ‘isSuffering’.)
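The symmetry can be made concrete with a toy sketch (the class and flag names here are hypothetical, echoing the ‘isSuffering’ program upthread): asserting either label in code is equally cheap, so neither label, by itself, is evidence of the property it names.

```python
# Hypothetical illustration: an agent whose 'properties' are nothing but
# named boolean flags. Setting is_human costs exactly as little as setting
# is_suffering, so 'that's just a black box' undermines appeals to both.

class LabeledAgent:
    def __init__(self, is_suffering: bool, is_human: bool):
        self.is_suffering = is_suffering  # a label, not evidence of suffering
        self.is_human = is_human          # a label, not evidence of humanity

agent = LabeledAgent(is_suffering=True, is_human=True)
```

Inspecting either flag reveals only what the programmer chose to write, which is the sense in which the black-box objection cuts against ‘isHuman’ at least as hard as ‘isSuffering’.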
I assume you’re being tongue-in-cheek here, but be careful not to mislead spectators. ‘Human life isn’t perfect, ergo we are under no moral obligation to eschew torturing non-humans’ obviously isn’t sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans’ welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).
Nope.
I don’t think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can’t do. I don’t see a relevant disanalogy. (Other than the question-begging one ‘fish aren’t human’.)
I guess that should’ve ended ”...that fish can’t do and that are important parts of how they interact with other white people.” Black people are capable of participating in human society in a way that fish aren’t.
A “reversed stupidity is not intelligence” warning also seems appropriate here: I don’t think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
I don’t think we should stop making distinctions altogether either; I’m just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take ‘the expanding circle’ as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that’s improved far beyond contemporary society’s hodgepodge of standards.
I think the main lesson from ‘expanding circle’ events is that we should be relatively cautious about assuming that something isn’t a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. ‘Black people don’t have moral standing because they’re less intelligent than us’ fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other.)
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it’s a bit of an explanatory IOU until we know exactly what the neural basis of ‘consciousness’ is, but ‘intelligent’ and ‘able to participate in human society’ are IOUs in the same sense.) Likewise for gods and dead bodies—the former don’t exist, and the latter again fail very general criteria like ‘is it conscious?’ and ‘can it suffer?’ and ‘can it desire?’. These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap.
Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological ‘humanity’ and ‘inhumanity’ are significant, and that makes it dangerous to adopt a policy of ‘assume everything with a weird appearance or behavior has no moral rights until we’ve conclusively proved that its difference from us is only skin-deep’.
What about unconscious people?
So what’s your position on abortion?
I don’t know why you got a down-vote; these are good questions.
I’m not sure there are unconscious people. By ‘unconscious’ I meant ‘not having any experiences’. There’s also another sense of ‘unconscious’ in which people are obviously sometimes unconscious — whether they’re awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for ‘bare consciousness’, but it’s not necessary, since people can experience dreams while ‘unconscious’.
Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly ‘switches off’ — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like ‘Do we have a responsibility to make conscious beings come into existence?’ and ‘Do we have a responsibility to fulfill people’s wishes after they die?‘. I’d lean toward ‘yes’ on the former, ‘no but it’s generally useful to act as though we do’ on the latter.
Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It’s conceivable that there’s no true consciousness at all until after birth — analogously, it’s possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren’t so capable.
I also think the reasoning in this example is bad for general reasons, namely that moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).
I’m skeptical of the claim that any fish have societies in a meaningful sense. Citation?
If they’re intelligent enough we can still trade with them, and that’s fine.
I don’t think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
Yes: not capturing complexity of value. Again, morality doesn’t behave like science. Looking for general laws is not obviously a good methodology, and in fact I’m pretty sure it’s a bad methodology.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.
In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence—a more detailed map can be wrong about the territory in more ways.
Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. “Looking for general laws” is a good idea here for the same reason it’s a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we’re not complicating our theory in arbitrary or unnecessary ways.
Knowing at the outset that storms are complex doesn’t mean that we shouldn’t try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.
If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). “Too simple” is a valid objection if the premise “Not simple” is implied.
That’s assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that’s the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we’re talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to try to look more like you think the Final Theory will someday appear, you won’t learn as much from the areas where your map fails. ‘Value is complex’ is compatible with the utility of starting with simple models, particularly since we don’t yet know in what respects it is complex.
Obviously that’s not what I’m suggesting. What I’m suggesting is that it’s more complicated and that this complication is justified from my perspective because it captures my moral intuitions better.
What data?
Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you’re much bigger than an atom and much slower than light).
Isn’t a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don’t know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy—and are likely to become far fuzzier as we take more control of our genetic future. We also know that what’s normal for a certain species can vary wildly over historical time. ‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.
It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or ‘feels’?) distant, yet completely intolerable in contexts where this external technology is more ‘near’ on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?
I don’t find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.
Actually, now that you bring it up, I’m surprised by how similar the two are. ‘Heuristics’ by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.
I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that’s different from claiming that it’s an advantage of a moral claim that it gets the right answer less often.
I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?
If we can’t trade with them for some reason, it’s still not OK to torture them.
‘The psychological unity of mankind’ is question-begging here. It’s just a catchphrase; it’s not as though there’s some scientific law that all and only biologically human minds form a natural kind. If we’re having a battle of catchphrases, vegetarians can simply appeal to the ‘psychological unity of sentient beings’.
Sure, they’re less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I’m looking for is a reason to favor the one unity over an infinite number of rival unities.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it’s not enough to elevate it to a large probability.
I don’t think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.)
You’re still using a methodology that I think is suspect here. I don’t think there are good reasons to expect “everything that feels pain has moral value, period” to be a better moral heuristic than “some complicated set of conditions singles out the things that have moral value” if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, and if your System 2 can find bad implications of those explanations, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2.
Your intuition, not mine.
System 1 doesn’t know what a biological human is. I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.” Posthumans and sufficiently intelligent AI could also fall in this category, but I’m still pretty sure that fish don’t. I actually only care about the second principle.
While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
Thanks for fleshing out your view more! It’s likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you’ve done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it’s likely that no simple well-defined list of traits would provide a crisp criterion for what ‘friendship’ or ‘potential friendship’ means to you. It’s just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all.
The reliance on especially poorly-defined and essentializing categories bothers me, but I’ll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It’s not all-or-nothing.
Allowing that it’s not all-or-nothing lets us escape most of your view’s problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn’t be a ‘friend’ or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn’t be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between biological kinds. And once we admit such a scale, it becomes much harder to believe that the scale ends completely at a level above the most intelligent and suffering-prone non-human, as opposed to slowly trailing off into the phylogenetic distance.
On your view, the reason — at a deep level, the only reason — that it’s the least bit wrong to torture infants or invalids arbitrarily intensely for arbitrarily long periods of time, is that when we think of infants and invalids, we imagine them as (near) potential or (near) would-be ‘friends’ of ours. That is, I see a baby injured and in pain and I feel morally relevant sympathy (as opposed to, say, the confused, morally delusive sympathy some people feel toward dogs or dolphins) only because I imagine that the baby might someday grow up and, say, sell staplers to me, share secrets with me, or get me a job as a server at Olive Garden. Even if the child has some congenital disease that makes its premature death a physical certainty, still the shadow of its capacities to socialize with me, cast from nearby possible worlds (or just from the ease with which we could imagine the baby changing to carry on business transactions with me), confers moral significance upon its plight.
I just don’t think that’s so. Our sympathy toward infants doesn’t depend on a folk theory of human development. We’d feel the same sympathy, or at least something relevantly similar, even if we’d been raised in a bubble and never been told that infants are members of the human species at all prior to encountering one. If you knew all the physiological and psychological facts about infants except ones that showed they tend to develop into intelligent socializers, you’d already have plenty of good reason not to torture infants. Learning that infants develop into adult humans might add new (perhaps even better) reasons to not torture them, but it wouldn’t constitute the only such reasons. But in that case the sympathy we feel for animals with infant-like neural and psychological traits, but that never develop into active participants in complex language-using societies, would be morally relevant for the same reasons.
We’ve been speaking as though my view of morality were the simple one, yours relatively complex and rich. But in fact I think your reluctance to say ‘it’s bad when chickens suffer’ results from an attempt to oversimplify and streamline your own ethical reasoning, privileging one set of moral intuitions without providing a positive argument for the incoherence or illegitimacy of the others. I think it’s entirely possible that some of our moral intuitions are specific to beings like adult humans (e.g., intuitions of reciprocity, fairness, and authority), while others generalize to varying degrees to non-humans (e.g., intuitions of suffering and gain). Because different moral thoughts and emotions can be triggered by radically different stimuli, we shouldn’t be surprised that some of those stimuli require an assumption of intelligence (hence, if applied to chickens, plausibly depend on excessive anthropomorphization), while others require very different assumptions. In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
Maybe, evolutionarily, we feel sympathy for non-humans only as a side-effect of traits selected because they help us cooperate with other humans. But that is merely a historical fact; it has no bearing on the legitimacy, or the reality, of our sympathy for suffering non-humans. What’s really needed to justify carnivory is a proof that this sympathy presupposes an ascription of properties to non-humans that they in fact lack, e.g., sentience. Our intuitions about the moral relevance of chickens would then go the way of moral intuitions about ghosts and gods. But I haven’t seen a case like this made yet; the claim has not been that animals don’t suffer (or that their ‘suffering’ is so radically unlike human suffering that it loses its moral valence). Rather, the claim has been that if brutes do suffer, that doesn’t matter (except insofar as it also causes human suffering).
We both think moral value is extremely complicated; but I think it’s relatively disjunctive, whereas you think it’s relatively conjunctive. I’d be interested to hear arguments for why we should consider it conjunctive, but I still think it’s important that to the extent we’re likely to be in error at all, we err on the side of privileging disjunctivity.
This is a good point. I’ll have to think about this.
This is quite a good post, thanks for taking the time to write it. You’ve said before that you think vegetarianism is the morally superior option. While you’ve done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter?
I ask in part because I don’t think that erring on the side of disjunctivity, as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships), is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis, we might be less likely to exclude from moral consideration those that ought to be included, but we would be more likely to distribute moral value unevenly where it should be evenly distributed. We’ve historically had both problems, and I don’t know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the Holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say).
EDIT: I should add, and not at all by way of criticism, that for all the pejorative aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it’s true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being.
It’s hard to talk about this in the abstract, so maybe you should say more about what you’re worried about, and (ideally) about some alternative that avoids the problem. It sounds like you’re suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I’d be happy to be pointed in the right direction).
My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu’s motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but it could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.
So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full-fledged people.
The reason this might be worrying is that our understanding of what it is to be human, or of what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc., and thereby of denying to many people who fall short in these ways the moral significance we generally think they’re owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not say they were morally insignificant, mind, just that given their capacities slavery was the best they could do.
I’m not saying you’re committed to any kind of moral oligarchy, only that because this kind of disjunctive strategy eschews the direct and solitary link between humanity and moral value, it cannot be called the safer option without further ado. A society in error could do as much damage proceeding by your disjunctive rule (by messing up the distributions) as they could proceeding with a conjunctive rule (by messing up who counts as ‘human’).
An alternative might be to say that there is moral value proper, which every human being (or relevantly human-like thing) has, and then there are a variety of defective or subordinate forms of moral significance depending on how something is related to that moral core. This way, you’d keep the hard moral floor, but you’d be able to argue for the (non-intrinsic) moral value of non-humans. (Unfortunately, this alternative also deploys an Aristotelian idea: core-dependent homonymy.)
The term you are looking for here is ‘person’. The debate you are currently having is about what creatures are persons.
The following definitions aid clarity in this discussion:
Animal—a particular form of life that has evolved on Earth; most animals are mobile, multicellular, and respond to their environment (though none of these criteria is universally necessary or sufficient).
Human—a member of the species Homo sapiens, a particular type of hairless ape.
Person—a being which has recognized agency, and (in many moral systems) specific rights.
Note that separating ‘person’ from ‘human’ allows you to recognize that not all humans are persons in all moral systems (e.g., apartheid regimes and ethnic cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).
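To echo the earlier `isSuffering` example: the point is that ‘human’ and ‘person’ are independent predicates, not one field. A toy sketch (all names here are hypothetical illustrations, not anyone’s actual proposal):

```python
from dataclasses import dataclass

@dataclass
class Being:
    """Toy model: humanity and personhood are separate attributes."""
    description: str
    is_human: bool   # member of the species Homo sapiens
    is_person: bool  # recognized agency/rights in a given moral system

# The edge cases from the definitions above:
brain_dead_patient = Being("brain-dead human", is_human=True, is_person=False)
typical_adult = Being("typical adult human", is_human=True, is_person=True)
hypothetical_ai = Being("sufficiently capable AI", is_human=False, is_person=True)

# Collapsing the two predicates into one would erase exactly these cases:
assert brain_dead_patient.is_human and not brain_dead_patient.is_person
assert not hypothetical_ai.is_human and hypothetical_ai.is_person
```

Of course, setting `is_person=True` no more settles personhood than setting `isSuffering = true` settles suffering; the sketch only shows that the two questions can come apart.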
Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain’s functioning.
You are deferring to evidence; I just haven’t given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven’t bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you’re some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn’t be asking me for arguments at all. However, because we’re primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective—we’re experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)
But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish.
Agreed, but this is why I think the analogy to science is inappropriate.
I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please.
Fair enough! I don’t have a settled view on how much moral evidence should be introspective v. intersubjective, as long as we agree that it’s broadly empirical.
With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold that human socialization demonstrates right now as the important one, or whatever threshold human socialization demonstrates at any given moment?
For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?
Thinking about this… while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral values to human societies, rather than to individual humans.
I’m not yet sure what I want to do with that.
It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they’ve been brought up in the relevant way, they’re no less capable of social and sapient behavior.
On the other hand, the fish-prosthetic is part of what constitutes the fish’s capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities.
I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of individual human) in light of this distinction.
Hm.
Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest.
And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there’s some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there’s an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect.
So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here.
Fair enough.