What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian?
Huh. I’m drawing the same kind of blank I’d draw if someone asked me to provide an argument for why the suffering of red-haired people should count equally to the suffering of black-haired people. Why would the suffering of one species be more important than the suffering of another? Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything that we would classify as suffering, but then you said “human suffering is more important”, not “there are some classes of animals that suffer less”. I’m not sure I can offer a good argument against “human suffering is more important”, because it strikes me as so completely arbitrary and unjustified that I’m not sure what the arguments for it would be.
Why would the suffering of one species be more important than the suffering of another?
Because one of those species is mine?
I’m not sure I can offer a good argument against “human suffering is more important”, because it strikes me as so completely arbitrary and unjustified that I’m not sure what the arguments for it would be.
Historically, most humans have viewed a much smaller set of (living, mortal) organisms as being the set of (living, mortal) organisms whose suffering matters, e.g. human members of their own tribe. How would you classify these humans? Would you say that their morality is arbitrary and unjustified? If so, I wonder why they’re so similar. If I were to imagine a collection of arbitrary moralities, I’d expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now? If so, have you read gwern’s The Narrowing Circle (which is the reason for the living and mortal qualifiers above)?
There is something in human nature that cares about things similar to itself. Even if we’re currently infected with memes suggesting that this something should be rejected insofar as it distinguishes between different humans (and I think we should be honest with ourselves about the extent to which this is a contingent fact about current moral fashions rather than a deep moral truth), trying to reject it as much as we can is forgetting that we’re rebelling within nature.
I care about humans because I think that in principle I’m capable of having a meaningful interaction with any human: in principle, I could talk to them, laugh with them, cry with them, sing with them, dance with them… I can’t do any of these things with, say, a fish. When I ask my brain in what category it places fish, it responds “natural resources.” And natural resources should be conserved, of course (for the sake of future humans), but I don’t assign them moral value.
Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?
Yes! We know stuff that our ancestors didn’t know; we have capabilities that they didn’t have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I’m going to protect me and my friends and other humans before worrying about other creatures, but that’s not because nonhumans don’t matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn’t make it okay.
We know stuff that our ancestors didn’t know; we have capabilities that they didn’t have.
I’m more than willing to agree that our ancestors were factually confused, but I think it’s important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:
I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, “Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?” But surely the reason we do not execute witches is that we do not believe there are such things. If we did, if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.
I think our ancestors were primarily factually, rather than morally, confused. I don’t see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).
If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
Yes, given bounded resources, I’m going to protect me and my friends and other humans before worrying about other creatures
Cool. Then we’re in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
(I have no idea how consciousness works, so in general, I can’t answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can’t affect what the program is actually doing.
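(To make that concrete, here is a minimal sketch in Python; this is hypothetical code of my own, not anything from the thread. The two functions below perform the same computation and differ only in a variable’s name, so whatever facts about suffering hold of one must hold of the other.)

```python
# Minimal illustration (hypothetical example): these two functions
# perform the same computation; only the variable name differs.
# If suffering supervenes on what a program actually does, renaming
# a variable can neither create nor destroy it.

def program_with_suggestive_name():
    isSuffering = True       # suggestively named flag
    return int(isSuffering)

def program_with_neutral_name():
    flag = True              # same computation, neutral name
    return int(flag)

assert program_with_suggestive_name() == program_with_neutral_name()
```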
humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead
That doesn’t follow if it turns out that preventing animal suffering is sufficiently cheap.
I’m not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren’t wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one’s affective state.) If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
I don’t think this is specifically relevant. I upvoted your ‘blue robot’ comment because this is an important issue to worry about, but ‘that’s a black box’ can’t be used as a universal bludgeon. (Particularly given that it defeats appeals to ‘isHuman’ even more thoroughly than it defeats appeals to ‘isSuffering’.)
Cool. Then we’re in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead)
I assume you’re being tongue-in-cheek here, but be careful not to mislead spectators. ‘Human life isn’t perfect, ergo we are under no moral obligation to eschew torturing non-humans’ obviously isn’t sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans’ welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).
I assume you’re being tongue-in-cheek here
Nope.
White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
I don’t think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can’t do. I don’t see a relevant disanalogy. (Other than the question-begging one ‘fish aren’t human’.)
I guess that should’ve ended “...that fish can’t do and that are important parts of how they interact with other white people.” Black people are capable of participating in human society in a way that fish aren’t.
A “reversed stupidity is not intelligence” warning also seems appropriate here: I don’t think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
I don’t think we should stop making distinctions altogether either; I’m just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take ‘the expanding circle’ as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that’s improved far beyond contemporary society’s hodgepodge of standards.
I think the main lesson from ‘expanding circle’ events is that we should be relatively cautious about assuming that something isn’t a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. ‘Black people don’t have moral standing because they’re less intelligent than us’ fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other.)
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it’s a bit of an explanatory IOU until we know exactly what the neural basis of ‘consciousness’ is, but ‘intelligent’ and ‘able to participate in human society’ are IOUs in the same sense.) Likewise for gods and dead bodies—the former don’t exist, and the latter again fail very general criteria like ‘is it conscious?’ and ‘can it suffer?’ and ‘can it desire?’. These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap.
Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological ‘humanity’ and ‘inhumanity’ are significant, and that makes it dangerous to adopt a policy of ‘assume everything with a weird appearance or behavior has no moral rights until we’ve conclusively proved that its difference from us is only skin-deep’.
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum.
What about unconscious people?
Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it.
I don’t know why you got a down-vote; these are good questions.
What about unconscious people?
I’m not sure there are unconscious people. By ‘unconscious’ I meant ‘not having any experiences’. There’s also another sense of ‘unconscious’ in which people are obviously sometimes unconscious — whether they’re awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for ‘bare consciousness’, but it’s not necessary, since people can experience dreams while ‘unconscious’.
Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly ‘switches off’ — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like ‘Do we have a responsibility to make conscious beings come into existence?’ and ‘Do we have a responsibility to fulfill people’s wishes after they die?‘. I’d lean toward ‘yes’ on the former, ‘no but it’s generally useful to act as though we do’ on the latter.
So what’s your position on abortion?
Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It’s conceivable that there’s no true consciousness at all until after birth — analogously, it’s possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.
Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren’t so capable.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).
Plus many fish can participate in their own societies.
I’m skeptical of the claim that any fish have societies in a meaningful sense. Citation?
If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them?
If they’re intelligent enough we can still trade with them, and that’s fine.
Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other
I don’t think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap. Possibly they fall into a new and different trap, though?
Yes: not capturing complexity of value. Again, morality doesn’t behave like science. Looking for general laws is not obviously a good methodology, and in fact I’m pretty sure it’s a bad methodology.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.
In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence—a more detailed map can be wrong about the territory in more ways.
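(One way to illustrate that last point, with numbers of my own choosing rather than anything from the thread: a theory that asserts more independent claims has more chances to be wrong.)

```latex
% Illustrative arithmetic only (my example): a theory making k
% independent claims, each correct with probability p, is entirely
% correct with probability
\[
  P(\text{all } k \text{ claims correct}) = p^k .
\]
% With p = 0.9, a one-claim theory is right with probability 0.9,
% but a five-claim theory is entirely right with probability
% only 0.9^5, roughly 0.59.
```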
Again, morality doesn’t behave like science.
Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. “Looking for general laws” is a good idea here for the same reason it’s a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we’re not complicating our theory in arbitrary or unnecessary ways.
Knowing at the outset that storms are complex doesn’t mean that we shouldn’t try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories.
If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). “Too simple” is a valid objection if the premise “Not simple” is implied.
That’s assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that’s the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we’re talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to try to look more like you think the Final Theory will someday appear, you won’t learn as much from the areas where your map fails. ‘Value is complex’ is compatible with the utility of starting with simple models, particularly since we don’t yet know in what respects it is complex.
To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong
Obviously that’s not what I’m suggesting. What I’m suggesting is that it’s both more complicated and that this complication is justified from my perspective because it captures my moral intuitions better.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering.
Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you’re much bigger than an atom and much slower than light).
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society.
Isn’t a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don’t know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy—and are likely to become far fuzzier as we take more control of our genetic future. We also know that what’s normal for a certain species can vary wildly over historical time. ‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.
It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or ‘feels’?) distant, yet completely intolerable in contexts where this external technology is more ‘near’ on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?
I don’t find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering.
Actually, now that you bring it up, I’m surprised by how similar the two are. ‘Heuristics’ by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.
in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases
I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that’s different from claiming that it’s an advantage of a moral claim that it gets the right answer less often.
I’m skeptical of the claim that any fish have societies in a meaningful sense.
I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?
If they’re intelligent enough we can still trade with them, and that’s fine.
If we can’t trade with them for some reason, it’s still not OK to torture them.
The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
‘The psychological unity of mankind’ is question-begging here. It’s just a catchphrase; it’s not as though there’s some scientific law that all and only biologically human minds form a natural kind. If we’re having a battle of catchphrases, vegetarians can simply appeal to the ‘psychological unity of sentient beings’.
Sure, they’re less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I’m looking for is a reason to favor the one unity over an infinite number of rival unities.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it’s not enough to elevate it to a large probability.
‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans
I don’t think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.)
It seems damningly arbitrary to me.
You’re still using a methodology that I think is suspect here. I don’t think there are good reasons to expect “everything that feels pain has moral value, period” to be a better moral heuristic than “some complicated set of conditions singles out the things that have moral value” if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2.
My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer
Your intuition, not mine.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena
System 1 doesn’t know what a biological human is. I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.” Posthumans and sufficiently intelligent AI could also fall in this category, but I’m still pretty sure that fish don’t. I actually only care about the second principle.
that other models can handle with only a single generalization.
While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
Thanks for fleshing out your view more! It’s likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you’ve done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it’s likely that no simple well-defined list of traits would provide a crisp criterion for what ‘friendship’ or ‘potential friendship’ means to you. It’s just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all.
The reliance on especially poorly-defined and essentializing categories bothers me, but I’ll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It’s not all-or-nothing.
Allowing that it’s not all-or-nothing lets us escape most of your view’s problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn’t be a ‘friend’ or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn’t be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between biological kinds. And once we admit such a scale, it becomes much harder to believe that the scale ends completely at a level above the most intelligent and suffering-prone non-human, as opposed to slowly trailing off into the phylogenetic distance.
On your view, the reason — at a deep level, the only reason — that it’s the least bit wrong to torture infants or invalids arbitrarily intensely for arbitrarily long periods of time, is that when we think of infants and invalids, we imagine them as (near) potential or (near) would-be ‘friends’ of ours. That is, I see a baby injured and in pain and I feel morally relevant sympathy (as opposed to, say, the confused, morally delusive sympathy some people feel toward dogs or dolphins) only because I imagine that the baby might someday grow up and, say, sell staplers to me, share secrets with me, or get me a job as a server at Olive Garden. Even if the child has some congenital disease that makes its premature death a physical certainty, still the shadow of its capacities to socialize with me, cast from nearby possible worlds (or just from the ease with which we could imagine the baby changing to carry on business transactions with me), confers moral significance upon its plight.
I just don’t think that’s so. Our sympathy toward infants doesn’t depend on a folk theory of human development. We’d feel the same sympathy, or at least something relevantly similar, even if we’d been raised in a bubble and never been told that infants are members of the human species at all prior to encountering one. If you knew all the physiological and psychological facts about infants except ones that showed they tend to develop into intelligent socializers, you’d already have plenty of good reason not to torture infants. Learning that infants develop into adult humans might add new (perhaps even better) reasons to not torture them, but it wouldn’t constitute the only such reasons. But in that case the sympathy we feel for animals with infant-like neural and psychological traits, but that never develop into active participants in complex language-using societies, would be morally relevant for the same reasons.
We’ve been speaking as though my view of morality were the simple one, yours relatively complex and rich. But in fact I think your reluctance to say ‘it’s bad when chickens suffer’ results from an attempt to oversimplify and streamline your own ethical reasoning, privileging one set of moral intuitions without providing a positive argument for the incoherence or illegitimacy of the others. I think it’s entirely possible that some of our moral intuitions are specific to beings like adult humans (e.g., intuitions of reciprocity, fairness, and authority), while others generalize to varying degrees to non-humans (e.g., intuitions of suffering and gain). Because different moral thoughts and emotions can be triggered by radically different stimuli, we shouldn’t be surprised that some of those stimuli require an assumption of intelligence (hence, if applied to chickens, plausibly depend on excessive anthropomorphization), while others require very different assumptions. In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
Maybe, evolutionarily, we feel sympathy for non-humans only as a side-effect of traits selected because they help us cooperate with other humans. But that then is just a historical fact; it has no bearing on the legitimacy, or the reality, of our sympathy for suffering non-humans. What’s really needed to justify carnivory is a proof that this sympathy presupposes an ascription of properties to non-humans that they in fact lack, e.g., sentience. Our intuitions about the moral relevance of chickens would then go the way of moral intuitions about ghosts and gods. But I haven’t seen a case like this made yet; the claim has not been that animals don’t suffer (or that their ‘suffering’ is so radically unlike human suffering that it loses its moral valence). Rather, the claim has been that if brutes do suffer, that doesn’t matter (except insofar as it also causes human suffering).
We both think moral value is extremely complicated; but I think it’s relatively disjunctive, whereas you think it’s relatively conjunctive. I’d be interested to hear arguments for why we should consider it conjunctive, but I still think it’s important that to the extent we’re likely to be in error at all, we err on the side of privileging disjunctivity.
In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
This is a good point. I’ll have to think about this.
This is quite a good post, thanks for taking the time to write it. You’ve said before that you think vegetarianism is the morally superior option. While you’ve done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter?
I ask in part because I don’t think the claim that we ought to err on the side of disjunctivity as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships) is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We’ve historically had both problems, and I don’t know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the Holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say).
EDIT: I should add, and not at all by way of criticism, that for all the pejorative aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it’s true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being.
Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed.
It’s hard to talk about this in the abstract, so maybe you should say more about what you’re worried about, and (ideally) about some alternative that avoids the problem. It sounds like you’re suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
the import of suffering is not completely dependent on the import of socializing,
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I’d be happy to be pointed in the right direction).
Then we may end up saying that some groups of humans deserve more rights than others, in a non-meritocratic way. Is that your worry?
My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu’s motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.
So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full-fledged people.
The reason this might be worrying is that our understanding of what it is to be human, or what kinds of things are morally valuable now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc. and thereby denying to many people who fall short in these ways the moral significance we generally think they’re owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not say they were morally insignificant, mind, just that given their capacities slavery was the best they could do.
I’m not saying you’re committed to any kind of moral oligarchy, only that because this kind of disjunctive strategy eschews the direct and solitary link between humanity and moral value, it cannot be called the safer option without further ado. A society in error could do as much damage proceeding by your disjunctive rule (by messing up the distributions) as they could proceeding with a conjunctive rule (by messing up who counts as ‘human’).
An alternative might be to say that there is moral value proper, that every human being (or relevantly human-like thing) has, and then there are a variety of defective or subordinate forms of moral significance depending on how something is related to that moral core. This way, you’d keep the hard moral floor, but you’d be able to argue for the (non-intrinsic) moral value of non-humans. (Unfortunately, this alternative also deploys an Aristotelian idea: core-dependent homonymy.)
I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.”
The term you are looking for here is ‘person’. The debate you are currently having is about what creatures are persons.
The following definitions aid clarity in this discussion:
Animal—a particular form of life that has evolved on earth; most animals are mobile, multicellular, and respond to their environment (but this is not universally necessary or sufficient).
Human—a member of the species Homo sapiens, a particular type of hairless ape.
Person—A being which has recognized agency, and (in many moral systems) specific rights.
Note that separating ‘person’ from ‘human’ allows you to recognize the possibility that not all humans are necessarily persons in all moral systems (e.g., apartheid regimes and ethnic cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons, etc.). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).
I don’t think most fish have complicated enough minds for this to be true.
Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain’s functioning.
Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
You are deferring to evidence; I just haven’t given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven’t bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you’re some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn’t be asking me for arguments at all. However, because we’re primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective—we’re experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)
But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish.
I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please.
this is why I think the analogy to science is inappropriate.
Fair enough! I don’t have a settled view on how much moral evidence should be introspective v. intersubjective, as long as we agree that it’s broadly empirical.
With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold which human socialization currently demonstrates as the important threshold, or the threshold which human socialization demonstrates at any given moment?
For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?
I would be tempted to ascribe moral value to the prosthetic, not the fish.
Thinking about this… while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral values to human societies, rather than to individual humans.
It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they’ve been brought up in the relevant way, they’re no less capable of social and sapient behavior.
On the other hand, the fish-prosthetic is part of what constitutes the fish’s capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities.
I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of individual human) in light of this distinction.
Hm. Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest. And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there’s some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there’s an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect.
So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here.
I’ve seen that C.S. Lewis quote before, and it seems to me quite mistaken. In this part:
But surely the reason we do not execute witches is that we do not believe there are such things. If we did, if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did.
Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that’s wrong. What was bad about witch hunts was:
People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the “trial” process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.
Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we’d carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).
So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there’s no such crime in the first place), but we shouldn’t therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.
If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?
If you think humanity as a whole has made substantial moral progress throughout history, what’s driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don’t have an analogous story about moral progress. How do you distinguish the current state of affairs from “moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral”?
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?
There is a difference between “we should take precautions to make sure the witch doesn’t blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual” and “let’s just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc.” Regardless of what you think would happen in practice (fear makes people do all sorts of things), it’s clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we’re not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.
If you think humanity as a whole has made substantial moral progress throughout history, what’s driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don’t have an analogous story about moral progress. How do you distinguish the current state of affairs from “moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral”?
That’s two questions (“what drives moral progress” and “how can you distinguish moral progress from a random walk”). They’re both interesting, but the former is not particularly relevant to the current discussion. (It’s an interesting question, however, and Yvain makes some convincing arguments at his blog [sorry, don’t have link to specific posts atm] that it’s technological advancement that drives what we think of as “moral progress”.)
As for how I can distinguish it from a random walk — that’s harder. However, my objection was to Lewis’s assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we’ve made moral progress per se to make my objection.
If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.
No, they don’t. Are you saying it’s not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).
There’s a difference between “it’s possible to construct a mind” and “other particular minds are likely to be constructed a certain way.” Our minds were built by the same forces that built other minds we know of. We should expect there to be similarities.
(I also would define it, not in terms of “pain and suffering” but “preference satisfaction and dissatisfaction”. I think I might consider “suffering” as dissatisfaction, by definition, although “pain” is more specific and might be valuable for some minds.)
I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don’t even have so much as a strong certainty.
I don’t know that I’m comfortable with identifying “suffering” with “preference dissatisfaction” (btw, do you mean by this “failure to satisfy preferences” or “antisatisfaction of negative preferences”? i.e. if I like playing video games and I don’t get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).
I can’t speak for Raemon, but I would certainly say that the condition described by “I like playing video games and am prohibited from playing video games” is a trivial but valid instance of the category /suffering/.
Is the difficulty that there’s a different word you’d prefer to use to refer to the category I’m nodding in the direction of, or that you think the category itself is meaningless, or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so), or something else?
I’m usually indifferent to semantics, so if you’d prefer a different word, I’m happy to use whatever word you like when discussing the category with you.
… or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so)
That one. Also, what term we should use for which categories of things, and whether I know what you’re talking about, depends on what claims are being made… I was objecting to Zack_M_Davis’s claim, which I take to be something either like:
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also.”
or...
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also.”
I don’t think either of those claims is justified. Do you think they are? If you do, I guess we’ll have to work out what you’re referring to when you say “suffering”, and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we’re referring to.)
I don’t think either of those claims is justified. Do you think they are?
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don’t. So let me back up a little.
Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that’s strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).
I don’t actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.
As long as C is high—that is, as long as we really are confident that the other brain has a “same or similar implementation”, as you say, along salient dimensions (such as manifesting similar subjective experience)—then I’m pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is “completely identical” to (S1,B1), I’m “certain” I prefer B2 not be in S2.
But I’m not sure that’s actually what you mean when you say “same or similar implementation.” You might, for example, mean that they have anatomical points of correspondence, but you aren’t confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
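(If it helps to see the shape of this, here’s a toy formalization in Python. The linear blend and the 0.5 “agnostic” prior are arbitrary illustrative choices of mine, not claims about how the weighting actually works; the only feature that matters is that my confidence about (B2,S2) rises monotonically with C.)

    # Toy model: confidence that I antiprefer brain B2 being in state S2,
    # given my confidence that I antiprefer B1 in S1, and my confidence C
    # that (B2, S2) is relevantly similar to (B1, S1).
    def antiprefer_s2(antiprefer_s1: float, c: float) -> float:
        neutral = 0.5  # arbitrary assumption: with no similarity information, I'm agnostic
        return c * antiprefer_s1 + (1 - c) * neutral

    print(antiprefer_s2(0.99, 1.0))  # "completely identical": 0.99, the near-certainty carries over
    print(antiprefer_s2(0.99, 0.3))  # mere anatomical correspondence: 0.647, much less sure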
Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
Is brain B1 your brain in this scenario? Or just… some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings’ brain states.
Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as “pain” and “suffering” (which, for us, might usefully be operationalized as “brain states we prefer not to be in”) are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing “pain” and “suffering” (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...
Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.
Or, he could have been making the claim that we can usefully describe the category of “pain” and/or “suffering” in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don’t know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.
I don’t think that conclusion is justified either… or rather, I don’t think it’s instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as “suffering” is by definition. And we all know that arguing “by definition” makes a def out of I and… wait… hm… well, it’s bad.
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.
My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological, or whatever) details depends a lot on those details. As does my confidence that B’s mind antiprefers the experiential correlates of those details. I agree that there’s no strict entailment here, though, “merely” evidence.
That said, mere evidence can get us pretty far. I am not inclined to dismiss it.
No they don’t. Are you saying it’s not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things?
I’d do it that way. It doesn’t strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of “pain”. (Subjects report that they notice the sensation of pain, but they claim it doesn’t bother them.) I’d define suffering as wanting to get out of the state you’re in. If you’re fine with the state you’re in, it is not what I consider to be suffering.
So, a question for anyone who both agrees with that formulation and thinks that “we should care about the suffering of animals” (or some similar view):
Do you think that animals can “want to get out of the state they’re in”?
This varies from animal to animal. There’s a fair amount of research/examination into which animals appear to do so, some of which is linked elsewhere in this discussion. (At least some of it was linked in response to a statement about fish.)
On why the suffering of one species would be more important than the suffering of another:
Because one of those species is mine?
Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition for species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?
Does that also apply to race and gender? If not, why not?
I feel psychologically similar to humans of different races and genders but I don’t feel psychologically similar to members of most different species.
A common definition for species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother?
Uh, no. System 1 doesn’t know what a species is; that’s just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can’t, not really.
This general argument of “the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad” strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn’t have this property?
And does it at all bother you that racists or sexists can use an analogous line of defense?
Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can’t Say?
I should add to this that even if I endorse what you call “prejudice against prejudice” here—that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence—it doesn’t follow that because racists or sexists can use a particular argument A as a line of defense, there’s therefore something wrong with A.
There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.
This general argument of “the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad” strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn’t have this property?
Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don’t want to be the sort of person that would have been racist or sexist in previous centuries. If you don’t share that premise, there is no way for me to show that you’re being inconsistent—I acknowledge that.
Would you say that their morality is arbitrary and unjustified? If so, I wonder why they’re so similar. If I were to imagine a collection of arbitrary moralities, I’d expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?
I should probably clarify—when I said that valuing humans over animals strikes me as arbitrary, I’m saying that it’s arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that’s not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more as an explanation of my initial reaction to your question than as anything that would be convincing on logical grounds, though I did also assign some probability to it possibly being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person’s belief in that position, regardless of whether that effect is “logical”.)
I’ve been meaning to write a post about how I think it’s a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out.
(You shouldn’t regard it as a strength of your moral framework that it can’t distinguish humans from non-human animals. That’s evidence that it isn’t capable of capturing complexity of value.)
I agree that thinking about morality exclusively in terms of axioms in a classical logical system is likely to be a rather bad idea, since that makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which morality resembles a system of classical logic in general. But I’m not sure it’s that problematic as long as you keep in mind that “axioms” is really just shorthand for something like “moral subprograms” or “moral dynamics”.
I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind of the correctness of some action unless it contains a dynamic which reacts to your argument in the way you wish—in other words, unless your argument builds on things that the mind’s decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind’s preferences.
You shouldn’t regard it as a strength of your moral framework that it can’t distinguish humans from non-human animals. That’s evidence that it isn’t capable of capturing complexity of value.
I’m not really sure of what you mean here. For one, I didn’t say that my moral framework can’t distinguish humans and non-humans—I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people’s feelings of safety, which would contribute to the creation of much more suffering than killing animals would.
Also, whether or not my personal moral framework can capture complexity of value seems irrelevant—CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I’d program into an AI.
Also, whether or not my personal moral framework can capture complexity of value seems irrelevant—CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on [...]
Well, I don’t think I should care what I care about. The important thing is what’s right, and my emotions are only relevant to the extent that they communicate facts about what’s right. What’s right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn’t hold too much import, on pain of moral wireheading/acceptance of a fake utility function.
(Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that’s available in practice, but that doesn’t mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)
I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of their lives, and “in deciding what to do, don’t pay attention to what you want” isn’t very useful advice. (It also makes any kind of instrumental rationality impossible.)
What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn’t mean that you expect them to be accurate, they are just the best you have available in practice.
Estimate of the accuracy of the moral intuitions/principles translates into an estimate of value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
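(A toy simulation of that tradeoff, with entirely made-up numbers: two candidate “policies” whose true values we only see through noise, and a small fixed cost for doing more research before acting. Nothing here is specific to morality; it just illustrates how expected inaccuracy raises the value of information.)

    import random
    random.seed(0)

    def trial(do_research, noise=2.0, cost=0.1):
        # Two actions with unknown true values; our initial estimate is noisy.
        true = [random.gauss(0, 1) for _ in range(2)]
        est = [v + random.gauss(0, noise) for v in true]
        if not do_research:
            # Overestimating our accuracy: exploit the current estimate.
            return true[est.index(max(est))]
        # Expecting inaccuracy: average ten noisy samples per action, pay the cost.
        refined = [(e + sum(v + random.gauss(0, noise) for _ in range(9))) / 10
                   for e, v in zip(est, true)]
        return true[refined.index(max(refined))] - cost

    for label, research in (("exploit now   ", False), ("research first", True)):
        print(label, sum(trial(research) for _ in range(10000)) / 10000)

At these (made-up) settings the researcher comes out ahead on average despite paying the cost every time; shrink the noise and the ordering flips.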
I’m not a very well educated person in this field, but if I may:
I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes, they must be disciplined and treated with suspicion, but for the most part, they are there to please you in their own dumb way. They’re no more enemies than one’s preference for foods. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human—that is, some things are important only because I’m a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one’s mind, even when one KNOWS it is wrong, can be a source of pain, I’ve found—hypocrisy and indecision are not my friends.
Hope I didn’t make a mess of things with this comment.
I’m roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:
1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction—if someone switches to an exploitation phase “too early”, then over time, their values may actually shift over to what the person thought they were.
2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don’t match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.
The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn’t use terminology like exploration/exploitation that implies that it would be just one of those.
But to some extent, our conscious models of our values do shape our unconscious values in that direction
This is value drift. At any given time, you should fix (i.e. notice, as a concept) the implicit idealized values at that time and pursue them even if your hardware later changes and starts implying different values (in the sense where your dog or your computer or an alien also should (normatively) pursue them forever, they are just (descriptively) unlikely to, but you should plot to make that more likely, all else equal). As an analogy, if you are interested in solving different puzzles on different days, then the fact that you are no longer interested in solving yesterday’s puzzle doesn’t address the problem of solving yesterday’s puzzle. And idealized values don’t describe valuation of you, the abstract personal identity, of your actions and behavior and desires. They describe valuation of the whole world, including future you with value drift as a particular case that is not fundamentally special. The problem doesn’t change, even if the tendency to be interested in a particular problem does. The problem doesn’t get solved because you are no longer interested in it. Solving a new, different problem does not address the original problem.
Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it
The nature of idealized values is irrelevant to this point: whatever they are, they are that thing that they are, so that any “correction” discards the original problem statement and replaces it with a new one. What you can and should correct are intermediate conclusions. (Alternatively, we are arguing about definitions, and you read in my use of the term “values” what I would call intermediate conclusions, but then again I’m interested in you noticing the particular idea that I refer to with this term.)
if we realize that our conscious values don’t match our unconscious ones
I don’t think “unconscious values” is a good proxy for abstract implicit valuation of the universe; consciously-inaccessible processes in the brain are at a vastly different level of abstraction compared to the idealization I’m talking about.
The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them
This might be true in the sense that humans probably underdetermine the valuation of the world, so that there are some situations that our implicit preferences can’t compare even in principle. The choice between such situations is arbitrary according to our values. Or our values might just recursively determine the correct choice in every single definable distinction. Any other kind of “creation” will contradict the implicit answer, and so even if it is the correct thing to do given the information available at the time, later reflection can show it to be suboptimal.
(More constructively, the proper place for creativity is in solving problems, not in choosing a supergoal. The intuition is confused on this point, because humans never saw a supergoal, all sane goals that we formulate for ourselves are in one way or another motivated by other considerations, they are themselves solutions to different problems. Thus, creativity is helpful in solving those different problems in order to recognize which new goals are motivated. But this is experience about subgoals, not idealized supergoals.)
I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing “what we want” in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.
I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing “what we want” in order to have any way of ensuring that an AI will further the things we want.
The question of what is normatively the right thing to do (given the resources available) is the same for a FAI and in our personal lives. My understanding is that “implicit idealized value” is the shape of the correct answer to it, not just a tool restricted to the context of FAI. It might be hard for a human to proceed from this concept to concrete decisions, but this is a practical difficulty, not a restriction on the scope of applicability of the idea. (And to see how much of a practical difficulty it is, it is necessary to actually attempt to resolve it.)
I do not understand why the concept would be relevant to our personal lives, however.
If idealized value indicates the correct shape of normativity, the question should instead be, How are our personal lives relevant to idealized value? One way was discussed a couple of steps above in this conversation: exploitation/exploration tradeoff. In pursuit of idealized values, if in our personal lives we can’t get much information about them, a salient action is to perform/support research into idealized values (or relevant subproblems, such as preventing/evading global catastrophes).
why the suffering of red-haired people should count equally to the suffering of black-haired people
I’ve interacted with enough red-haired people and enough black-haired people that (assuming the anti-zombie principle) I’m somewhat confident that there’s no big difference on average between the ways they suffer. I’m nowhere near as confident about fish.
I already addressed that uncertainty in my comment:
Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything that we would classify as suffering, but then you said “human suffering is more important”, not “there are some classes of animals that suffer less”.
To elaborate: it’s perfectly reasonable to discount the suffering of e.g. fish by some factor because one thinks that fish probably suffer less. But as I read it, someone who says “human suffering is more important” isn’t saying that: they’re saying that they wouldn’t care about animal suffering even if it was certain that animals suffered just as much as humans, or even if it was certain that animals suffered more than humans. It’s saying that no matter the intensity or nature of the suffering, only suffering that comes from humans counts.
Yes! We know stuff that our ancestors didn’t know; we have capabilities that they didn’t have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I’m going to protect me and my friends and other humans before worrying about other creatures, but that’s not because nonhumans don’t matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn’t make it okay.
I’m more than willing to agree that our ancestors were factually confused, but I think it’s important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:
But surely the reason we do not execute witches is that we do not believe there are such things. If we did—if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbours or drive them mad or bring bad weather—surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did? There is no difference of moral principle here: the difference is simply about matter of fact.
I think our ancestors were primarily factually, rather than morally, confused. I don’t see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
Cool. Then we’re in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.
(I have no idea how consciousness works, so in general, I can’t answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can’t affect what the program is actually doing.
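(To make the point concrete, here is a toy program, not anyone’s real proposal:)

    # The variable's name plays no role in execution; renaming it to
    # isHappy produces byte-for-byte identical observable behavior.
    isSuffering = True
    if isSuffering:
        print("the program does exactly the same thing under any variable name")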
That doesn’t follow if it turns out that preventing animal suffering is sufficiently cheap.
I’m not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren’t wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one’s affective state.) If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
I don’t think this is specifically relevant. I upvoted your ‘blue robot’ comment because this is an important issue to worry about, but ‘that’s a black box’ can’t be used as a universal bludgeon. (Particularly given that it defeats appeals to ‘isHuman’ even more thoroughly than it defeats appeals to ‘isSuffering’.)
I assume you’re being tongue-in-cheek here, but be careful not to mislead spectators. ‘Human life isn’t perfect, ergo we are under no moral obligation to eschew torturing non-humans’ obviously isn’t sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans’ welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t excuse their worrying about their (white) friends and family to the extreme exclusion of black people.
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).
Nope.
I don’t think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can’t do. I don’t see a relevant disanalogy. (Other than the question-begging one ‘fish aren’t human’.)
I guess that should’ve ended ”...that fish can’t do and that are important parts of how they interact with other white people.” Black people are capable of participating in human society in a way that fish aren’t.
A “reversed stupidity is not intelligence” warning also seems appropriate here: I don’t think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
I don’t think we should stop making distinctions altogether either; I’m just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take ‘the expanding circle’ as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that’s improved far beyond contemporary society’s hodgepodge of standards.
I think the main lesson from ‘expanding circle’ events is that we should be relatively cautious about assuming that something isn’t a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. ‘Black people don’t have moral standing because they’re less intelligent than us’ fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other.)
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it’s a bit of an explanatory IOU until we know exactly what the neural basis of ‘consciousness’ is, but ‘intelligent’ and ‘able to participate in human society’ are IOUs in the same sense.) Likewise for gods and dead bodies—the former don’t exist, and the latter again fail very general criteria like ‘is it conscious?’ and ‘can it suffer?’ and ‘can it desire?’. These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap.
Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological ‘humanity’ and ‘inhumanity’ are significant, and that makes it dangerous to adopt a policy of ‘assume everything with a weird appearance or behavior has no moral rights until we’ve conclusively proved that its difference from us is only skin-deep’.
What about unconscious people?
So what’s your position on abortion?
I don’t know why you got a down-vote; these are good questions.
I’m not sure there are unconscious people. By ‘unconscious’ I meant ‘not having any experiences’. There’s also another sense of ‘unconscious’ in which people are obviously sometimes unconscious — whether they’re awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for ‘bare consciousness’, but it’s not necessary, since people can experience dreams while ‘unconscious’.
Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly ‘switches off’ — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like ‘Do we have a responsibility to make conscious beings come into existence?’ and ‘Do we have a responsibility to fulfill people’s wishes after they die?‘. I’d lean toward ‘yes’ on the former, ‘no but it’s generally useful to act as though we do’ on the latter.
Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It’s conceivable that there’s no true consciousness at all until after birth — analogously, it’s possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren’t so capable.
I also think the reasoning in this example is bad for general reasons, namely that moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).
I’m skeptical of the claim that any fish have societies in a meaningful sense. Citation?
If they’re intelligent enough we can still trade with them, and that’s fine.
I don’t think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
Yes: not capturing complexity of value. Again, morality doesn’t behave like science. Looking for general laws is not obviously a good methodology, and in fact I’m pretty sure it’s a bad methodology.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.
In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence—a more detailed map can be wrong about the territory in more ways.
Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. “Looking for general laws” is a good idea here for the same reason it’s a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we’re not complicating our theory in arbitrary or unnecessary ways.
Knowing at the outset that storms are complex doesn’t mean that we shouldn’t try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.
If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). “Too simple” is a valid objection if the premise “Not simple” is implied.
That’s assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that’s the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we’re talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to look more like what you think the Final Theory will someday look like, you won’t learn as much from the areas where your map fails. ‘Value is complex’ is compatible with the utility of starting with simple models, particularly since we don’t yet know in what respects it is complex.
Obviously that’s not what I’m suggesting. What I’m suggesting is both that it’s more complicated and that this complication is justified from my perspective because it captures my moral intuitions better.
What data?
Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you’re much bigger than an atom and much slower than light).
Isn’t a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don’t know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy—and are likely to become far fuzzier as we take more control of our genetic future. We also know that what’s normal for a certain species can vary wildly over historical time. ‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.
It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or ‘feels’?) distant, yet completely intolerable in contexts where this external technology is more ‘near’ on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?
I don’t find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.
Actually, now that you bring it up, I’m surprised by how similar the two are. ‘Heuristics’ by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.
I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that’s different from claiming that it’s an advantage of a moral claim that it gets the right answer less often.
I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?
If we can’t trade with them for some reason, it’s still not OK to torture them.
‘The psychological unity of mankind’ is question-begging here. It’s just a catchphrase; it’s not as though there’s some scientific law that all and only biologically human minds form a natural kind. If we’re having a battle of catchphrases, vegetarians can simply appeal to the ‘psychological unity of sentient beings’.
Sure, they’re less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I’m looking for is a reason to favor the one unity over an infinite number of rival unities.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it’s not enough to elevate it to a large probability.
I don’t think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.)
You’re still using a methodology that I think is suspect here. I don’t think there’s good reasons to expect “everything that feels pain has moral value, period” to be a better moral heuristic than “some complicated set of conditions singles out the things that have moral value” if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2.
Your intuition, not mine.
System 1 doesn’t know what a biological human is. I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.” Posthumans and sufficiently intelligent AI could also fall in this category, but I’m still pretty sure that fish don’t. I actually only care about the second principle.
While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
Thanks for fleshing out your view more! It’s likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you’ve done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it’s likely that no simple well-defined list of traits would provide a crisp criterion for what ‘friendship’ or ‘potential friendship’ means to you. It’s just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all.
The reliance on especially poorly-defined and essentializing categories bothers me, but I’ll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It’s not all-or-nothing.
Allowing that it’s not all-or-nothing lets us escape most of your view’s problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn’t be a ‘friend’ or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn’t be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between biological kinds. And once we admit such a scale, it becomes much harder to believe that the scale ends completely at a level above the most intelligent and suffering-prone non-human, as opposed to slowly trailing off into the phylogenetic distance.
On your view, the reason — at a deep level, the only reason — that it’s the least bit wrong to torture infants or invalids arbitrarily intensely for arbitrarily long periods of time, is that when we think of infants and invalids, we imagine them as (near) potential or (near) would-be ‘friends’ of ours. That is, I see a baby injured and in pain and I feel morally relevant sympathy (as opposed to, say, the confused, morally delusive sympathy some people feel toward dogs or dolphins) only because I imagine that the baby might someday grow up and, say, sell staplers to me, share secrets with me, or get me a job as a server at Olive Garden. Even if the child has some congenital disease that makes its premature death a physical certainty, still the shadow of its capacities to socialize with me, cast from nearby possible worlds (or just from the ease with which we could imagine the baby changing to carry on business transactions with me), confers moral significance upon its plight.
I just don’t think that’s so. Our sympathy toward infants doesn’t depend on a folk theory of human development. We’d feel the same sympathy, or at least something relevantly similar, even if we’d been raised in a bubble and never been told that infants are members of the human species at all prior to encountering one. If you knew all the physiological and psychological facts about infants except ones that showed they tend to develop into intelligent socializers, you’d already have plenty of good reason not to torture infants. Learning that infants develop into adult humans might add new (perhaps even better) reasons to not torture them, but it wouldn’t constitute the only such reasons. But in that case the sympathy we feel for animals with infant-like neural and psychological traits, but that never develop into active participants in complex language-using societies, would be morally relevant for the same reasons.
We’ve been speaking as though my view of morality were the simple one, yours relatively complex and rich. But in fact I think your reluctance to say ‘it’s bad when chickens suffer’ results from an attempt to oversimplify and streamline your own ethical reasoning, privileging one set of moral intuitions without providing a positive argument for the incoherence or illegitimacy of the others. I think it’s entirely possible that some of our moral intuitions are specific to beings like adult humans (e.g., intuitions of reciprocity, fairness, and authority), while others generalize to varying degrees to non-humans (e.g., intuitions of suffering and gain). Because different moral thoughts and emotions can be triggered by radically different stimuli, we shouldn’t be surprised that some of those stimuli require an assumption of intelligence (hence, if applied to chickens, plausibly depend on excessive anthropomorphization), while others require very different assumptions. In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
Maybe, evolutionarily, we feel sympathy for non-humans only as a side-effect of traits selected because they help us cooperate with other humans. But that then is just a historical fact; it has no bearing on the legitimacy, or the reality, of our sympathy for suffering non-humans. What’s really needed to justify carnivory is a proof that this sympathy presupposes an ascription of properties to non-humans that they in fact lack, e.g., sentience. Our intuitions about the moral relevance of chickens would then go the way of moral intuitions about ghosts and gods. But I haven’t seen a case like this made yet; the claim has not been that animals don’t suffer (or that their ‘suffering’ is so radically unlike human suffering that it loses its moral valence). Rather, the claim has been that if brutes do suffer, that doesn’t matter (except insofar as it also causes human suffering).
We both think moral value is extremely complicated; but I think it’s relatively disjunctive, whereas you think it’s relatively conjunctive. I’d be interested to hear arguments for why we should consider it conjunctive, but I still think it’s important that to the extent we’re likely to be in error at all, we err on the side of privileging disjunctivity.
This is a good point. I’ll have to think about this.
This is quite a good post, thanks for taking the time to write it. You’ve said before that you think vegetarianism is the morally superior option. While you’ve done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter?
I ask in part because I don’t think that erring on the side of disjunctivity as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships) is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis, we might be less likely to exclude from moral consideration those who ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We’ve historically had both problems, and I don’t know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the Holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say).
EDIT: I should add, and not at all by way of criticism, that for all the pejorative aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it’s true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being.
It’s hard to talk about this in the abstract, so maybe you should say more about what you’re worried about, and (ideally) about some alternative that avoids the problem. It sounds like you’re suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I’d be happy to be pointed in the right direction).
My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu’s motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but it could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.
So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full-fledged people.
The reason this might be worrying is that our understanding of what it is to be human, or of what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc., thereby denying to many people who fall short in these ways the moral significance we generally think they’re owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not say they were morally insignificant, mind, just that given their capacities slavery was the best they could do.
I’m not saying you’re committed to any kind of moral oligarchy, only that because this kind of disjunctive strategy eschews the direct and solitary link between humanity and moral value, it cannot be called the safer option without further ado. A society in error could do as much damage proceeding by your disjunctive rule (by messing up the distributions) as they could proceeding with a conjunctive rule (by messing up who counts as ‘human’).
An alternative might be to say that there is moral value proper, which every human being (or relevantly human-like thing) has, and then there are a variety of defective or subordinate forms of moral significance depending on how something is related to that moral core. This way, you’d keep the hard moral floor, but you’d be able to argue for the (non-intrinsic) moral value of non-humans. (Unfortunately, this alternative also deploys an Aristotelian idea: core-dependent homonymy.)
The term you are looking for here is ‘person’. The debate you are currently having is about what creatures are persons.
The following definitions aid clarity in this discussion:
Animal—a particular form of life that has evolved on Earth; most animals are mobile, multicellular, and respond to their environment (though none of these criteria is universally necessary or sufficient).
Human—a member of the species Homo sapiens, a particular type of hairless ape.
Person—a being with recognized agency and (in many moral systems) specific rights.
Note that separating ‘person’ from ‘human’ allows you to recognize the possibility that not all humans are persons in all moral systems (e.g., apartheid regimes and ethnic-cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).
Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain’s functioning.
You are deferring to evidence; I just haven’t given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven’t bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you’re some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn’t be asking me for arguments at all. However, because we’re primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective—we’re experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)
But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish.
Agreed, but this is why I think the analogy to science is inappropriate.
I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please.
Fair enough! I don’t have a settled view on how much moral evidence should be introspective vs. intersubjective, as long as we agree that it’s broadly empirical.
With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold which human socialization currently demonstrates as the important threshold, or the threshold which human socialization demonstrates at any given moment?
For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?
Thinking about this… while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral value to human societies, rather than to individual humans.
I’m not yet sure what I want to do with that.
It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they’ve been brought up in the relevant way, they’re no less capable of social and sapient behavior.
On the other hand, the fish-prosthetic is part of what constitutes the fish’s capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities.
I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of individual human) in light of this distinction.
Hm.
Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest.
And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there’s some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there’s an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect.
So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here.
Fair enough.
I’ve seen that C.S. Lewis quote before, and it seems to me quite mistaken. In this part:
Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that’s wrong. What was bad about witch hunts was:
1. People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the “trial” process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.
2. Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we’d carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).
So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there’s no such crime in the first place), but we shouldn’t therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.
Considering that people seemed to think this was the best way to find witches, (1) still seems like a factual confusion.
(2) was based on a Bible quote, I think. The state hanged witches.
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?
If you think humanity as a whole has made substantial moral progress throughout history, what’s driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don’t have an analogous story about moral progress. How do you distinguish the current state of affairs from “moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral”?
Who knows what kind of things a real witch could do to an executioner, for that matter?
There is a difference between “we should take precautions to make sure the witch doesn’t blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual” and “let’s just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc.” Regardless of what you think would happen in practice (fear makes people do all sorts of things), it’s clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we’re not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.
That’s two questions (“what drives moral progress?” and “how can you distinguish moral progress from a random walk?”). They’re both interesting, but the former is not particularly relevant to the current discussion. (Yvain makes some convincing arguments at his blog [sorry, don’t have links to the specific posts atm] that it’s technological advancement that drives what we think of as “moral progress”.)
As for how I can distinguish it from a random walk — that’s harder. However, my objection was to Lewis’s assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we’ve made moral progress per se to make my objection.
No they don’t. Are you saying it’s not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).
There’s a difference between “it’s possible to construct a mind” and “other particular minds are likely to be constructed a certain way.” Our minds were built by the same forces that built the other minds we know of. We should expect there to be similarities.
(I also would define it, not in terms of “pain and suffering” but “preference satisfaction and dissatisfaction”. I think I might consider “suffering” as dissatisfaction, by definition, although “pain” is more specific and might be valuable for some minds.)
Such as human masochists.
I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don’t even have so much as a strong certainty.
I don’t know that I’m comfortable with identifying “suffering” with “preference dissatisfaction” (btw, do you mean by this “failure to satisfy preferences” or “antisatisfaction of negative preferences”? i.e. if I like playing video games and I don’t get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).
I can’t speak for Raemon, but I would certainly say that the condition described by “I like playing video games and am prohibited from playing video games” is a trivial but valid instance of the category /suffering/.
Is the difficulty that there’s a different word you’d prefer to use to refer to the category I’m nodding in the direction of, or that you think the category itself is meaningless, or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so), or something else?
I’m usually indifferent to semantics, so if you’d prefer a different word, I’m happy to use whatever word you like when discussing the category with you.
That one. Also, what term we should use for what categories of things, and whether I know what you’re talking about, depends on what claims are being made… I was objecting to Zack_M_Davis’s claim, which I take to be either something like:
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also.”
or...
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way; then, if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also.”
I don’t think either of those claims are justified. Do you think they are? If you do, I guess we’ll have to work out what you’re referring to when you say “suffering”, and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we’re referring to.)
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don’t. So let me back up a little.
Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
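(A minimal formal restatement of that relation, with notation introduced purely for this sketch: let $A_i$ be the event that I antiprefer brain $B_i$ being in state $S_i$. Taking the premise $A_1$ as given, the claim is just one of monotonicity, plus certainty at the boundary:

$$\frac{\partial}{\partial C}\Pr(A_2 \mid A_1, C) > 0, \qquad \Pr(A_2 \mid A_1, C = 1) = 1,$$

where the boundary condition corresponds to the “completely identical” case mentioned below.)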
So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that’s strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).
I don’t actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.
As long as C is high—that is, as long as we really are confident that the other brain has a “same or similar implementation”, as you say, along salient dimensions (such as manifesting similar subjective experience)—then I’m pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is “completely identical” to (S1,B1), I’m “certain” I prefer B2 not be in S2.
But I’m not sure that’s actually what you mean when you say “same or similar implementation.” You might, for example, mean that they have anatomical points of correspondence, but you aren’t confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
Is brain B1 your brain in this scenario? Or just… some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings’ brain states.
Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as “pain” and “suffering” (which, for us, might usefully be operationalized as “brain states we prefer not to be in”) are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing “pain” and “suffering” (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...
Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.
Or, he could have been making the claim that we can usefully describe the category of “pain” and/or “suffering” in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don’t know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.
I don’t think that conclusion is justified either… or rather, I don’t think it’s instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as “suffering” is by definition. And we all know that arguing “by definition” makes a def out of I and… wait… hm… well, it’s bad.
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.
My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological, or whatever) details depends a lot on those details. As does my confidence that B’s mind antiprefers the experiential correlates of those details. I agree that there’s no strict entailment here, though, “merely” evidence.
That said, mere evidence can get us pretty far. I am not inclined to dismiss it.
I’d do it that way. It doesn’t strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of “pain”. (Subjects report that they notice the sensation of pain, but they claim it doesn’t bother them.) I’d define suffering as wanting to get out of the state you’re in. If you’re fine with the state you’re in, it is not what I consider to be suffering.
Ok, that seems workable to a first approximation.
So, a question for anyone who both agrees with that formulation and thinks that “we should care about the suffering of animals” (or some similar view):
Do you think that animals can “want to get out of the state they’re in”?
Yes?
This varies from animal to animal. There’s a fair amount of research/examination into which animals appear to do so, some of which is linked to elsewhere in this discussion. (At least some examination was linked to in response to a statement about fish.)
On why the suffering of one species would be more important than the suffering of another:
Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition for species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?
I feel psychologically similar to humans of different races and genders but I don’t feel psychologically similar to members of most different species.
Uh, no. System 1 doesn’t know what a species is; that’s just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can’t, not really.
This general argument of “the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad” strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn’t have this property?
Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can’t Say?
I should add to this that even if I endorse what you call “prejudice against prejudice” here—that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence—it doesn’t follow that because racists or sexists can use a particular argument A as a line of defense, there’s therefore something wrong with A.
There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.
Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don’t want to be the sort of person that would have been racist or sexist in previous centuries. If you don’t share that premise, there is no way for me to show that you’re being inconsistent—I acknowledge that.
Wow! So you’ve solved friendly AI? Eliezer will be happy to hear that.
I’m pretty sure Eliezer already knew our brains contained the basis of morality.
I should probably clarify—when I said that valuing humans over animals strikes me as arbitrary, I’m saying that it’s arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that’s not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more as an explanation of my initial reaction to your question than as anything that would be convincing on logical grounds, though I did also assign some probability to it being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person’s belief in that position, regardless of whether that effect is “logical”.)
I’ve been meaning to write a post about how I think it’s a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out.
(You shouldn’t regard it as a strength of your moral framework that it can’t distinguish humans from non-human animals. That’s evidence that it isn’t capable of capturing complexity of value.)
I agree that thinking about morality exclusively in terms of axioms in a system of classical logic is likely to be a rather bad idea, since that makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which morality resembles a system of classical logic in general. But I’m not sure it’s that problematic as long as you keep in mind that “axioms” is really just shorthand for something like “moral subprograms” or “moral dynamics”.
I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind of the correctness of some action unless that mind contains a dynamic which reacts to your argument in the way you wish—in other words, unless your argument builds on things that the mind’s decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind’s preferences.
I’m not really sure of what you mean here. For one, I didn’t say that my moral framework can’t distinguish humans and non-humans—I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people’s feelings of safety, which would contribute to the creation of much more suffering than killing animals would.
Also, whether or not my personal moral framework can capture complexity of value seems irrelevant—CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I’d program into an AI.
Well, I don’t think I should care what I care about. The important thing is what’s right, and my emotions are only relevant to the extent that they communicate facts about what’s right. What’s right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn’t hold too much import, on pain of moral wireheading/acceptance of a fake utility function.
(Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that’s available in practice, but that doesn’t mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)
I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of their lives, and “in deciding what to do, don’t pay attention to what you want” isn’t very useful advice. (It also makes any kind of instrumental rationality impossible.)
What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn’t mean that you expect them to be accurate, they are just the best you have available in practice.
Estimate of the accuracy of the moral intuitions/principles translates into an estimate of value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
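A toy model may make the overestimation point concrete. The sketch below is entirely my own illustration, with made-up names and parameters: a two-armed Gaussian bandit played by Thompson sampling, where what governs exploration is the accuracy the agent assumes its noisy signals have. An overconfident agent's posterior collapses after a few observations, and it tends to lock in whichever option happened to look good first.

```python
import random

def run(assumed_noise, true_noise=1.0, true_means=(0.0, 0.2),
        steps=500, seed=0):
    rng = random.Random(seed)
    # Gaussian posterior (mean, variance) over each option's true value,
    # starting from a broad prior.
    post = [(0.0, 100.0), (0.0, 100.0)]
    total = 0.0
    for _ in range(steps):
        # Thompson sampling: draw a value from each posterior, pick the max.
        draws = [rng.gauss(m, v ** 0.5) for m, v in post]
        arm = draws.index(max(draws))
        reward = rng.gauss(true_means[arm], true_noise)  # the noisy "intuition"
        total += reward
        # Conjugate Gaussian update, using the agent's assumed noise level,
        # not the true one.
        m, v = post[arm]
        v_new = 1.0 / (1.0 / v + 1.0 / assumed_noise ** 2)
        m_new = v_new * (m / v + reward / assumed_noise ** 2)
        post[arm] = (m_new, v_new)
    return total / steps

# Overconfident, roughly calibrated, and underconfident accuracy estimates.
for noise in (0.05, 1.0, 5.0):
    avg = sum(run(noise, seed=s) for s in range(50)) / 50
    print(f"assumed_noise={noise}: mean reward per step {avg:.3f}")
```

The analogy to the comment above: the agent's estimate of its own accuracy, not the true accuracy, is what sets the effective value it places on gathering more information.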
I’m not a very well educated person in this field, but if I may:
I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes, they must be disciplined and treated with suspicion, but for the most part, they are there to please you in their own dumb way. They’re no more enemies than one’s preference for foods. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human—that is, some things are important only because I’m a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one’s mind, even when one KNOWS it is wrong, can be a source of pain, I’ve found—hypocrisy and indecision are not my friends.
Hope I didn’t make a mess of things with this comment.
I’m roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:
1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction—if someone switches to an exploitation phase “too early”, then over time, their values may actually shift over to what the person thought they were.
2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don’t match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.
The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn’t use terminology like exploration/exploitation that implies that it would be just one of those.
This is value drift. At any given time, you should fix (i.e. notice, as a concept) the implicit idealized values at that time and pursue them even if your hardware later changes and starts implying different values (in the sense where your dog or your computer or an alien also should (normatively) pursue them forever, they are just (descriptively) unlikely to, but you should plot to make that more likely, all else equal). As an analogy, if you are interested in solving different puzzles on different days, then the fact that you are no longer interested in solving yesterday’s puzzle doesn’t address the problem of solving yesterday’s puzzle. And idealized values don’t describe valuation of you, the abstract personal identity, of your actions and behavior and desires. They describe valuation of the whole world, including future you with value drift as a particular case that is not fundamentally special. The problem doesn’t change, even if the tendency to be interested in a particular problem does. The problem doesn’t get solved because you are no longer interested in it. Solving a new, different problem does not address the original problem.
The nature of idealized values is irrelevant to this point: whatever they are, they are that thing that they are, so that any “correction” discards the original problem statement and replaces it with a new one. What you can and should correct are intermediate conclusions. (Alternatively, we are arguing about definitions, and you read in my use of the term “values” what I would call intermediate conclusions, but then again I’m interested in you noticing the particular idea that I refer to with this term.)
I don’t think “unconscious values” is a good proxy for abstract implicit valuation of the universe, consciously-inaccessible processes in the brain are at a vastly different level of abstraction compared to the idealization I’m talking about.
This might be true in the sense that humans probably underdetermine the valuation of the world, so that there are some situations that our implicit preferences can’t compare even in principle. The choice between such situations is arbitrary according to our values. Or our values might just recursively determine the correct choice in every single definable distinction. Any other kind of “creation” will contradict the implicit answer, and so even if it is the correct thing to do given the information available at the time, later reflection can show it to be suboptimal.
(More constructively, the proper place for creativity is in solving problems, not in choosing a supergoal. The intuition is confused on this point, because humans never saw a supergoal, all sane goals that we formulate for ourselves are in one way or another motivated by other considerations, they are themselves solutions to different problems. Thus, creativity is helpful in solving those different problems in order to recognize which new goals are motivated. But this is experience about subgoals, not idealized supergoals.)
I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing “what we want” in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.
The question of what is normatively the right thing to do (given the resources available) is the same for a FAI and in our personal lives. My understanding is that “implicit idealized value” is the shape of the correct answer to it, not just a tool restricted to the context of FAI. It might be hard for a human to proceed from this concept to concrete decisions, but this is a practical difficulty, not a restriction on the scope of applicability of the idea. (And to see how much of a practical difficulty it is, it is necessary to actually attempt to resolve it.)
If idealized value indicates the correct shape of normativity, the question should instead be, How are our personal lives relevant to idealized value? One way was discussed a couple of steps above in this conversation: exploitation/exploration tradeoff. In pursuit of idealized values, if in our personal lives we can’t get much information about them, a salient action is to perform/support research into idealized values (or relevant subproblems, such as preventing/evading global catastrophes).
What does this mean? It sounds like you’re talking about some kind of objective morality?
I’ve interacted with enough red-haired people and enough black-haired people that (assuming the anti-zombie principle) I’m somewhat confident that there’s no big difference on average between the ways they suffer. I’m nowhere near as confident about fish.
I already addressed that uncertainty in my comment:
To elaborate: it’s perfectly reasonable to discount the suffering of e.g. fish by some factor because one thinks that fish probably suffer less. But as I read it, someone who says “human suffering is more important” isn’t saying that: they’re saying that they wouldn’t care about animal suffering even if it was certain that animals suffered just as much as humans, or even if it was certain that animals suffered more than humans. It’s saying that no matter the intensity or nature of the suffering, only suffering that comes from humans counts.
Even less so about silverfish, despite their complex mating rituals.