I asked this before but don’t remember if I got any good answers: I am still not convinced that I should care about animal suffering. Human suffering seems orders of magnitude more important. Also, meat is delicious and contains protein. What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian? Alternatively, how much would you be willing to pay me to stop eating meat?
What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian?
Huh. I’m drawing the same blank I would if someone asked me to provide an argument for why the suffering of red-haired people should count equally with the suffering of black-haired people. Why would the suffering of one species be more important than the suffering of another? Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything we would classify as suffering, but you said “human suffering is more important”, not “there are some classes of animals that suffer less”. I’m not sure I can offer a good argument against “human suffering is more important”, because it strikes me as so completely arbitrary and unjustified that I’m not sure what the arguments for it would be.
Why would the suffering of one species be more important than the suffering of another?
Because one of those species is mine?
I’m not sure I can offer a good argument against “human suffering is more important”, because it strikes me as so completely arbitrary and unjustified that I’m not sure what the arguments for it would be.
Historically, most humans have viewed a much smaller set of (living, mortal) organisms as being the set of (living, mortal) organisms whose suffering matters, e.g. human members of their own tribe. How would you classify these humans? Would you say that their morality is arbitrary and unjustified? If so, I wonder why they’re so similar. If I were to imagine a collection of arbitrary moralities, I’d expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now? If so, have you read gwern’s The Narrowing Circle (which is the reason for the living and mortal qualifiers above)?
There is something in human nature that cares about things similar to itself. Even if we’re currently infected with memes suggesting that this something should be rejected insofar as it distinguishes between different humans (and I think we should be honest with ourselves about the extent to which this is a contingent fact about current moral fashions rather than a deep moral truth), trying to reject it as much as we can is forgetting that we’re rebelling within nature.
I care about humans because I think that in principle I’m capable of having a meaningful interaction with any human: in principle, I could talk to them, laugh with them, cry with them, sing with them, dance with them… I can’t do any of these things with, say, a fish. When I ask my brain in what category it places fish, it responds “natural resources.” And natural resources should be conserved, of course (for the sake of future humans), but I don’t assign them moral value.
Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?
Yes! We know stuff that our ancestors didn’t know; we have capabilities that they didn’t have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I’m going to protect me and my friends and other humans before worrying about other creatures, but that’s not because nonhumans don’t matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn’t make it okay.
We know stuff that our ancestors didn’t know; we have capabilities that they didn’t have.
I’m more than willing to agree that our ancestors were factually confused, but I think it’s important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:
I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, ‘Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?’ But surely the reason we do not execute witches is that we do not believe there are such things. If we did-if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.
I think our ancestors were primarily factually, rather than morally, confused. I don’t see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).
If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
Yes, given bounded resources, I’m going to protect me and my friends and other humans before worrying about other creatures
Cool. Then we’re in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
(I have no idea how consciousness works, so in general, I can’t answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can’t affect what the program is actually doing.
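To make this concrete, here is a minimal illustrative sketch in Python (both functions are hypothetical examples of mine, nothing from the discussion): renaming the variable changes nothing about what either program computes, so the label alone can’t be what matters.

    # Two behaviorally identical programs; only the variable's name differs.
    def program_a():
        isSuffering = True   # the name suggests suffering...
        return 2 + 2         # ...but it plays no role in what the program computes

    def program_b():
        flag = True          # same program, different label
        return 2 + 2

    assert program_a() == program_b()  # identical behavior, so the name alone can't matter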
humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead
That doesn’t follow if it turns out that preventing animal suffering is sufficiently cheap.
I’m not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren’t wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one’s affective state.) If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
I don’t think this is specifically relevant. I upvoted your ‘blue robot’ comment because this is an important issue to worry about, but ‘that’s a black box’ can’t be used as a universal bludgeon. (Particularly given that it defeats appeals to ‘isHuman’ even more thoroughly than it defeats appeals to ‘isSuffering’.)
Cool. Then we’re in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead)
I assume you’re being tongue-in-cheek here, but be careful not to mislead spectators. ‘Human life isn’t perfect, ergo we are under no moral obligation to eschew torturing non-humans’ obviously isn’t sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans’ welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).
I assume you’re being tongue-in-cheek here
Nope.
White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
I don’t think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can’t do. I don’t see a relevant disanalogy. (Other than the question-begging one ‘fish aren’t human’.)
I guess that should’ve ended ”...that fish can’t do and that are important parts of how they interact with other white people.” Black people are capable of participating in human society in a way that fish aren’t.
A “reversed stupidity is not intelligence” warning also seems appropriate here: I don’t think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
I don’t think we should stop making distinctions altogether either; I’m just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take ‘the expanding circle’ as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that’s improved far beyond contemporary society’s hodgepodge of standards.
I think the main lesson from ‘expanding circle’ events is that we should be relatively cautious about assuming that something isn’t a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. ‘Black people don’t have moral standing because they’re less intelligent than us’ fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other.)
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it’s a bit of an explanatory IOU until we know exactly what the neural basis of ‘consciousness’ is, but ‘intelligent’ and ‘able to participate in human society’ are IOUs in the same sense.) Likewise for gods and dead bodies—the former don’t exist, and the latter again fail very general criteria like ‘is it conscious?’ and ‘can it suffer?’ and ‘can it desire?’. These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap.
Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological ‘humanity’ and ‘inhumanity’ are significant, and that makes it dangerous to adopt a policy of ‘assume everything with a weird appearance or behavior has no moral rights until we’ve conclusively proved that its difference from us is only skin-deep’.
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum.
What about unconscious people?
Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it.
I don’t know why you got a down-vote; these are good questions.
What about unconscious people?
I’m not sure there are unconscious people. By ‘unconscious’ I meant ‘not having any experiences’. There’s also another sense of ‘unconscious’ in which people are obviously sometimes unconscious — whether they’re awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for ‘bare consciousness’, but it’s not necessary, since people can experience dreams while ‘unconscious’.
Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly ‘switches off’ — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like ‘Do we have a responsibility to make conscious beings come into existence?’ and ‘Do we have a responsibility to fulfill people’s wishes after they die?‘. I’d lean toward ‘yes’ on the former, ‘no but it’s generally useful to act as though we do’ on the latter.
So what’s your position on abortion?
Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It’s conceivable that there’s no true consciousness at all until after birth — analogously, it’s possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.
Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren’t so capable.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).
Plus many fish can participate in their own societies.
I’m skeptical of the claim that any fish have societies in a meaningful sense. Citation?
If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them?
If they’re intelligent enough we can still trade with them, and that’s fine.
Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other
I don’t think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap. Possibly they fall into a new and different trap, though?
Yes: not capturing complexity of value. Again, morality doesn’t behave like science. Looking for general laws is not obviously a good methodology, and in fact I’m pretty sure it’s a bad methodology.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.
In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence—a more detailed map can be wrong about the territory in more ways.
Again, morality doesn’t behave like science.
Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. “Looking for general laws” is a good idea here for the same reason it’s a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we’re not complicating our theory in arbitrary or unnecessary ways.
Knowing at the outset that storms are complex doesn’t mean that we shouldn’t try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories.
If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). “Too simple” is a valid objection if the premise “Not simple” is implied.
That’s assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that’s the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we’re talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to try to look more like you think the Final Theory will someday appear, you won’t learn as much from the areas where your map fails. ‘Value is complex’ is compatible with the utility of starting with simple models, particularly since we don’t yet know in what respects it is complex.
To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong
Obviously that’s not what I’m suggesting. What I’m suggesting is that it’s both more complicated and that this complication is justified from my perspective because it captures my moral intuitions better.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering.
Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you’re much bigger than an atom and much slower than light).
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society.
Isn’t a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don’t know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy—and are likely to become far fuzzier as we take more control of our genetic future. We also know that what’s normal for a certain species can vary wildly over historical time. ‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.
It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or ‘feels’?) distant, yet completely intolerable in contexts where this external technology is more ‘near’ on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?
I don’t find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.
I also think the reasoning in this example is bad for general reasons, namely moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering.
Actually, now that you bring it up, I’m surprised by how similar the two are. ‘Heuristics’ by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.
in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases
I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that’s different from claiming that it’s an advantage of a moral claim that it gets the right answer less often.
I’m skeptical of the claim that any fish have societies in a meaningful sense.
I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?
If they’re intelligent enough we can still trade with them, and that’s fine.
If we can’t trade with them for some reason, it’s still not OK to torture them.
The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
‘The psychological unity of mankind’ is question-begging here. It’s just a catchphrase; it’s not as though there’s some scientific law that all and only biologically human minds form a natural kind. If we’re having a battle of catchphrases, vegetarians can simply appeal to the ‘psychological unity of sentient beings’.
Sure, they’re less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I’m looking for is a reason to favor the one unity over an infinite number of rival unities.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it’s not enough to elevate it to a large probability.
‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans
I don’t think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.)
It seems damningly arbitrary to me.
You’re still using a methodology that I think is suspect here. I don’t think there are good reasons to expect “everything that feels pain has moral value, period” to be a better moral heuristic than “some complicated set of conditions singles out the things that have moral value” if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2.
My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer
Your intuition, not mine.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena
System 1 doesn’t know what a biological human is. I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.” Posthumans and sufficiently intelligent AI could also fall in this category, but I’m still pretty sure that fish don’t. I actually only care about the second principle.
that other models can handle with only a single generalization.
While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
Thanks for fleshing out your view more! It’s likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you’ve done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it’s likely that no simple well-defined list of traits would provide a crisp criterion for what ‘friendship’ or ‘potential friendship’ means to you. It’s just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all.
The reliance on especially poorly-defined and essentializing categories bothers me, but I’ll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It’s not all-or-nothing.
Allowing that it’s not all-or-nothing lets us escape most of your view’s problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn’t be a ‘friend’ or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn’t be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between biological kinds. And once we admit such a scale, it becomes much harder to believe that the scale ends completely at a level above the most intelligent and suffering-prone non-human, as opposed to slowly trailing off into the phylogenetic distance.
On your view, the reason — at a deep level, the only reason — that it’s the least bit wrong to torture infants or invalids arbitrarily intensely for arbitrarily long periods of time, is that when we think of infants and invalids, we imagine them as (near) potential or (near) would-be ‘friends’ of ours. That is, I see a baby injured and in pain and I feel morally relevant sympathy (as opposed to, say, the confused, morally delusive sympathy some people feel toward dogs or dolphins) only because I imagine that the baby might someday grow up and, say, sell staplers to me, share secrets with me, or get me a job as a server at Olive Garden. Even if the child has some congenital disease that makes its premature death a physical certainty, still the shadow of its capacities to socialize with me, cast from nearby possible worlds (or just from the ease with which we could imagine the baby changing to carry on business transactions with me), confers moral significance upon its plight.
I just don’t think that’s so. Our sympathy toward infants doesn’t depend on a folk theory of human development. We’d feel the same sympathy, or at least something relevantly similar, even if we’d been raised in a bubble and never been told that infants are members of the human species at all prior to encountering one. If you knew all the physiological and psychological facts about infants except ones that showed they tend to develop into intelligent socializers, you’d already have plenty of good reason not to torture infants. Learning that infants develop into adult humans might add new (perhaps even better) reasons to not torture them, but it wouldn’t constitute the only such reasons. But in that case the sympathy we feel for animals with infant-like neural and psychological traits, but that never develop into active participants in complex language-using societies, would be morally relevant for the same reasons.
We’ve been speaking as though my view of morality were the simple one, yours relatively complex and rich. But in fact I think your reluctance to say ‘it’s bad when chickens suffer’ results from an attempt to oversimplify and streamline your own ethical reasoning, privileging one set of moral intuitions without providing a positive argument for the incoherence or illegitimacy of the others. I think it’s entirely possible that some of our moral intuitions are specific to beings like adult humans (e.g., intuitions of reciprocity, fairness, and authority), while others generalize to varying degrees to non-humans (e.g., intuitions of suffering and gain). Because different moral thoughts and emotions can be triggered by radically different stimuli, we shouldn’t be surprised that some of those stimuli require an assumption of intelligence (hence, if applied to chickens, plausibly depend on excessive anthropomorphization), while others require very different assumptions. In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
Maybe, evolutionarily, we feel sympathy for non-humans only as a side-effect of traits selected because they help us cooperate with other humans. But that then is just a historical fact; it has no bearing on the legitimacy, or the reality, of our sympathy for suffering non-humans. What’s really needed to justify carnivory is a proof that this sympathy presupposes an ascription of properties to non-humans that they in fact lack, e.g., sentience. Our intuitions about the moral relevance of chickens would then go the way of moral intuitions about ghosts and gods. But I haven’t seen a case like this made yet; the claim has not been that animals don’t suffer (or that their ‘suffering’ is so radically unlike human suffering that it loses its moral valence). Rather, the claim has been that if brutes do suffer, that doesn’t matter (except insofar as it also causes human suffering).
We both think moral value is extremely complicated; but I think it’s relatively disjunctive, whereas you think it’s relatively conjunctive. I’d be interested to hear arguments for why we should consider it conjunctive, but I still think it’s important that to the extent we’re likely to be in error at all, we err on the side of privileging disjunctivity.
In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
This is a good point. I’ll have to think about this.
This is quite a good post, thanks for taking the time to write it. You’ve said before that you think vegetarianism is the morally superior option. While you’ve done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter?
I ask in part because I don’t think the claim that we ought to err on the side of disjunctivity as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships) is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis, we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We’ve historically had both problems, and I don’t know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the Holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say).
EDIT: I should add, and not at all by way of criticism, that for all the pejorative aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it’s true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being.
Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed.
It’s hard to talk about this in the abstract, so maybe you should say more about what you’re worried about, and (ideally) about some alternative that avoids the problem. It sounds like you’re suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
the import of suffering is not completely dependent on the import of socializing,
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I’d be happy to be pointed in the right direction).
Then we may end up saying that some groups of humans deserve more rights than others, in a non-meritocratic way. Is that your worry?
My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu’s motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.
So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full fledged people.
The reason this might be worrying is that our understanding of what it is to be human, or of what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc., and thereby denying to many people who fall short in these ways the moral significance we generally think they’re owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not say they were morally insignificant, mind, just that given their capacities slavery was the best they could do.
I’m not saying you’re committed to any kind of moral oligarchy, only that because this kind of disjunctive strategy eschews the direct and solitary link between humanity and moral value, it cannot be called the safer option without further ado. A society in error could do as much damage proceeding by your disjunctive rule (by messing up the distributions) as they could proceeding with a conjunctive rule (by messing up who counts as ‘human’).
An alternative might be to say that there is moral value proper, which every human being (or relevantly human-like thing) has, and then there are a variety of defective or subordinate forms of moral significance depending on how something is related to that moral core. This way, you’d keep the hard moral floor, but you’d be able to argue for the (non-intrinsic) moral value of non-humans. (Unfortunately, this alternative also deploys an Aristotelian idea: core-dependent homonymy.)
I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.”
The term you are looking for here is ‘person’. The debate you are currently having is about what creatures are persons.
The following definitions aid clarity in this discussion:
Animal—a particular form of life that has evolved on earth; most animals are mobile, multicellular, and respond to their environment (but this is not universally necessary or sufficient).
Human—a member of the species Homo sapiens, a particular type of hairless ape.
Person—A being which has recognized agency, and (in many moral systems) specific rights.
Note that separating ‘person’ from ‘human’ allows you to recognize the possibility that all humans are not necessarily persons in all moral systems (i.e.: apartheid regimes and ethnic cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons, etc.). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).
I don’t think most fish have complicated enough minds for this to be true.
Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain’s functioning.
Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
You are deferring to evidence; I just haven’t given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven’t bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you’re some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn’t be asking me for arguments at all. However, because we’re primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective—we’re experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)
But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish.
I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please.
this is why I think the analogy to science is inappropriate.
Fair enough! I don’t have a settled view on how much moral evidence should be introspective v. intersubjective, as long as we agree that it’s broadly empirical.
With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold which human socialization currently demonstrates as the important threshold, or the threshold which human socialization demonstrates at any given moment?
For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?
I would be tempted to ascribe moral value to the prosthetic, not the fish.
Thinking about this… while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral value to human societies, rather than to individual humans.
It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they’ve been brought up in the relevant way, they’re no less capable of social and sapient behavior.
On the other hand, the fish-prosthetic is part of what constitutes the fish’s capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities.
I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of individual human) in light of this distinction.
Hm. Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest. And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there’s some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there’s an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect.
So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here.
I’ve seen that C.S. Lewis quote before, and it seems to me quite mistaken. In this part:
But surely the reason we do not execute witches is that we do not believe there are such things. If we did-if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather, surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did.
Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that’s wrong. What was bad about witch hunts was:
People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the “trial” process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.
Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we’d carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).
So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there’s no such crime in the first place), but we shouldn’t therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.
If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?
If you think humanity as a whole has made substantial moral progress throughout history, what’s driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don’t have an analogous story about moral progress. How do you distinguish the current state of affairs from “moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral”?
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?
There is a difference between “we should take precautions to make sure the witch doesn’t blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual” and “let’s just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc.” Regardless of what you think would happen in practice (fear makes people do all sorts of things), it’s clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we’re not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.
If you think humanity as a whole has made substantial moral progress throughout history, what’s driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don’t have an analogous story about moral progress. How do you distinguish the current state of affairs from “moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral”?
That’s two questions (“what drives moral progress” and “how can you distinguish moral progress from a random walk”). They’re both interesting, but the former is not particularly relevant to the current discussion. (It’s an interesting question, however, and Yvain makes some convincing arguments at his blog [sorry, don’t have link to specific posts atm] that it’s technological advancement that drives what we think of as “moral progress”.)
As for how I can distinguish it from a random walk — that’s harder. However, my objection was to Lewis’s assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we’ve made moral progress per se to make my objection.
If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.
No they don’t. Are you saying it’s not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).
There’s a difference between “it’s possible to construct a mind” and “other particular minds are likely to be constructed a certain way.” Our minds were built by the same forces that built other minds we know of. We should expect there to be similarities.
(I also would define it, not in terms of “pain and suffering” but “preference satisfaction and dissatisfaction”. I think I might consider “suffering” as dissatisfaction, by definition, although “pain” is more specific and might be valuable for some minds.)
I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don’t even have so much as a strong certainty.
I don’t know that I’m comfortable with identifying “suffering” with “preference dissatisfaction” (btw, do you mean by this “failure to satisfy preferences” or “antisatisfaction of negative preferences”? i.e. if I like playing video games and I don’t get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).
I can’t speak for Raemon, but I would certainly say that the condition described by “I like playing video games and am prohibited from playing video games” is a trivial but valid instance of the category /suffering/.
Is the difficulty that there’s a different word you’d prefer to use to refer to the category I’m nodding in the direction of, or that you think the category itself is meaningless, or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so), or something else?
I’m usually indifferent to semantics, so if you’d prefer a different word, I’m happy to use whatever word you like when discussing the category with you.
… or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so)
That one. Also, what term we should use for which categories of things, and whether I know what you’re talking about, depends on what claims are being made… I was objecting to Zack_M_Davis’s claim, which I take to be either something like:
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also.”
or...
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also.”
I don’t think either of those claims are justified. Do you think they are? If you do, I guess we’ll have to work out what you’re referring to when you say “suffering”, and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we’re referring to.)
I don’t think either of those claims are justified. Do you think they are?
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don’t. So let me back up a little.
Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that’s strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).
I don’t actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.
As long as C is high—that is, as long as we really are confident that the other brain has a “same or similar implementation”, as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I’m pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is “completely identical” to (S1,B1), I’m “certain” I prefer B2 not be in S2.
But I’m not sure that’s actually what you mean when you say “same or similar implementation.” You might, for example, mean that they have anatomical points of correspondence, but you aren’t confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
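A minimal sketch of that schema in code, assuming (purely as an illustrative simplification, not something stated above) that preference-confidence scales with the similarity-confidence C; the function name and the linear form are hypothetical:

    # Hypothetical sketch: confidence that I prefer B2 not be in S2, as a
    # monotone function of C, my confidence that (B2, S2) is relevantly
    # similar to (B1, S1). The linear form is an arbitrary illustrative choice.
    def confidence_prefer_not(similarity_c, confidence_for_b1=1.0):
        """Higher C means higher confidence that I prefer B2 not be in S2."""
        return similarity_c * confidence_for_b1

    print(confidence_prefer_not(0.95))  # near-identical implementation: high confidence
    print(confidence_prefer_not(0.2))   # mere anatomical correspondence: much lower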
Suppose I prefer that brain B1 not be in state S1. Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1. The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
Is brain B1 your brain in this scenario? Or just… some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings’ brain states.
Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as “pain” and “suffering” (which, for us, might usefully be operationalized as “brain states we prefer not to be in”) are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing “pain” and “suffering” (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...
Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.
Or, he could have been making the claim that we can usefully describe the category of “pain” and/or “suffering” in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don’t know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.
I don’t think that conclusion is justified either… or rather, I don’t think it’s instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as “suffering” is by definition. And we all know that arguing “by definition” makes a def out of I and… wait… hm… well, it’s bad.
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.
My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological, or whatever) details depends a lot on those details. As does my confidence that B’s mind antiprefers the experiential correlates of those details. I agree that there’s no strict entailment here, though, “merely” evidence.
That said, mere evidence can get us pretty far. I am not inclined to dismiss it.
No they don’t. Are you saying it’s not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things?
I’d do it that way. It doesn’t strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of “pain”. (Subjects report that they notice the sensation of pain, but they claim it doesn’t bother them.) I’d define suffering as wanting to get out of the state you’re in. If you’re fine with the state you’re in, it is not what I consider to be suffering.
So, a question for anyone who both agrees with that formulation and thinks that “we should care about the suffering of animals” (or some similar view):
Do you think that animals can “want to get out of the state they’re in”?
This varies from animal to animal. There’s a fair amount of research/examination into which animals appear to do so, some of which is linked elsewhere in this discussion. (At least some of it was linked in response to a statement about fish.)
On why the suffering of one species would be more important than the suffering of another:
Because one of those species is mine?
Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition for species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?
Does that also apply to race and gender? If not, why not?
I feel psychologically similar to humans of different races and genders but I don’t feel psychologically similar to members of most different species.
A common definition for species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother?
Uh, no. System 1 doesn’t know what a species is; that’s just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can’t, not really.
This general argument of “the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad” strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn’t have this property?
And does it at all bother you that racists or sexists can use an analogous line of defense?
Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can’t Say?
I should add to this that even if I endorse what you call “prejudice against prejudice” here—that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence—it doesn’t follow that because racists or sexists can use a particular argument A as a line of defense, there’s therefore something wrong with A.
There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.
This general argument of “the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad” strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn’t have this property?
Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don’t want to be the sort of person that would have been racist or sexist in previous centuries. If you don’t share that premise, there is no way for me to show that you’re being inconsistent—I acknowledge that.
Would you say that their morality is arbitrary and unjustified? If so, I wonder why they’re so similar. If I were to imagine a collection of arbitrary moralities, I’d expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?
I should probably clarify—when I said that valuing humans over animals strikes me as arbitrary, I’m saying that it’s arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that’s not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more as an explanation of my initial reaction to your question than as anything that would be convincing on logical grounds, though I did also assign some probability to it possibly being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person’s belief in that position, regardless of whether that effect is “logical”.)
I’ve been meaning to write a post about how I think it’s a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out.
(You shouldn’t regard it as a strength of your moral framework that it can’t distinguish humans from non-human animals. That’s evidence that it isn’t capable of capturing complexity of value.)
I agree that thinking about morality exclusively in terms of axioms in a classical logical system is likely to be a rather bad idea, since that makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which morality resembles a system of classical logic in general. But I’m not sure if it’s that problematic as long as you keep in mind that “axioms” is really just shorthand for something like “moral subprograms” or “moral dynamics”.
I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind of the correctness of some action unless it contains a dynamic which reacts to your argument in the way you wish—in other words, unless your argument builds on things that the mind’s decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind’s preferences.
You shouldn’t regard it as a strength of your moral framework that it can’t distinguish humans from non-human animals. That’s evidence that it isn’t capable of capturing complexity of value.
I’m not really sure of what you mean here. For one, I didn’t say that my moral framework can’t distinguish humans and non-humans—I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people’s feelings of safety, which would contribute to the creation of much more suffering than killing animals would.
Also, whether or not my personal moral framework can capture complexity of value seems irrelevant—CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I’d program into an AI.
Also, whether or not my personal moral framework can capture complexity of value seems irrelevant—CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on [...]
Well, I don’t think I should care what I care about. The important thing is what’s right, and my emotions are only relevant to the extent that they communicate facts about what’s right. What’s right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn’t hold too much import, on pain of moral wireheading/acceptance of a fake utility function.
(Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that’s available in practice, but that doesn’t mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)
I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of their lives, and “in deciding what to do, don’t pay attention to what you want” isn’t very useful advice. (It also makes any kind of instrumental rationality impossible.)
What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn’t mean that you expect them to be accurate, they are just the best you have available in practice.
Estimate of the accuracy of the moral intuitions/principles translates into an estimate of value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
I’m not a very well educated person in this field, but if I may:
I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes, they must be disciplined and treated with suspicion, but for the most part, they are there to please you in their own dumb way. They’re no more enemies than one’s preference for foods. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human—that is, some things are important only because I’m a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one’s mind, even when one KNOWS it is wrong, can be a source of pain, I’ve found—hypocrisy and indecision are not my friends.
Hope I didn’t make a mess of things with this comment.
I’m roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:
1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction—if someone switches to an exploitation phase “too early”, then over time, their values may actually shift over to what the person thought they were.
2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don’t match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.
The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn’t use terminology like exploration/exploitation that implies that it would be just one of those.
But to some extent, our conscious models of our values do shape our unconscious values in that direction
This is value drift. At any given time, you should fix (i.e. notice, as a concept) the implicit idealized values at that time and pursue them even if your hardware later changes and starts implying different values (in the sense where your dog or your computer or an alien also should (normatively) pursue them forever, they are just (descriptively) unlikely to, but you should plot to make that more likely, all else equal). As an analogy, if you are interested in solving different puzzles on different days, then the fact that you are no longer interested in solving yesterday’s puzzle doesn’t address the problem of solving yesterday’s puzzle. And idealized values don’t describe valuation of you, the abstract personal identity, of your actions and behavior and desires. They describe valuation of the whole world, including future you with value drift as a particular case that is not fundamentally special. The problem doesn’t change, even if the tendency to be interested in a particular problem does. The problem doesn’t get solved because you are no longer interested in it. Solving a new, different problem does not address the original problem.
Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it
The nature of idealized values is irrelevant to this point: whatever they are, they are that thing that they are, so that any “correction” discards the original problem statement and replaces it with a new one. What you can and should correct are intermediate conclusions. (Alternatively, we are arguing about definitions, and you read in my use of the term “values” what I would call intermediate conclusions, but then again I’m interested in you noticing the particular idea that I refer to with this term.)
if we realize that our conscious values don’t match our unconscious ones
I don’t think “unconscious values” is a good proxy for abstract implicit valuation of the universe, consciously-inaccessible processes in the brain are at a vastly different level of abstraction compared to the idealization I’m talking about.
The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them
This might be true in the sense that humans probably underdetermine the valuation of the world, so that there are some situations that our implicit preferences can’t compare even in principle. The choice between such situations is arbitrary according to our values. Or our values might just recursively determine the correct choice in every single definable distinction. Any other kind of “creation” will contradict the implicit answer, and so even if it is the correct thing to do given the information available at the time, later reflection can show it to be suboptimal.
(More constructively, the proper place for creativity is in solving problems, not in choosing a supergoal. The intuition is confused on this point, because humans never saw a supergoal, all sane goals that we formulate for ourselves are in one way or another motivated by other considerations, they are themselves solutions to different problems. Thus, creativity is helpful in solving those different problems in order to recognize which new goals are motivated. But this is experience about subgoals, not idealized supergoals.)
I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing “what we want” in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.
I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing “what we want” in order to have any way of ensuring that an AI will further the things we want.
The question of what is normatively the right thing to do (given the resources available) is the same for a FAI and in our personal lives. My understanding is that “implicit idealized value” is the shape of the correct answer to it, not just a tool restricted to the context of FAI. It might be hard for a human to proceed from this concept to concrete decisions, but this is a practical difficulty, not a restriction on the scope of applicability of the idea. (And to see how much of a practical difficulty it is, it is necessary to actually attempt to resolve it.)
I do not understand why the concept would be relevant to our personal lives, however.
If idealized value indicates the correct shape of normativity, the question should instead be, How are our personal lives relevant to idealized value? One way was discussed a couple of steps above in this conversation: exploitation/exploration tradeoff. In pursuit of idealized values, if in our personal lives we can’t get much information about them, a salient action is to perform/support research into idealized values (or relevant subproblems, such as preventing/evading global catastrophes).
why the suffering of red-haired people should count equally to the suffering of black-haired people
I’ve interacted with enough red-haired people and enough black-haired people that (assuming the anti-zombie principle) I’m somewhat confident that there’s no big difference on average between the ways they suffer. I’m nowhere near as confident about fish.
I already addressed that uncertainty in my comment:
Yes, it is plausible that once your nervous system becomes simple enough, you no longer experience anything that we would classify as suffering, but then you said “human suffering is more important”, not “there are some classes of animals that suffer less”.
To elaborate: it’s perfectly reasonable to discount the suffering of e.g. fish by some factor because one thinks that fish probably suffer less. But as I read it, someone who says “human suffering is more important” isn’t saying that: they’re saying that they wouldn’t care about animal suffering even if it was certain that animals suffered just as much as humans, or even if it was certain that animals suffered more than humans. It’s saying that no matter the intensity or nature of the suffering, only suffering that comes from humans counts.
Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?) But non-human animal suffering is likely to be orders of magnitude more common. Some non-human animals are probably capable of suffering, and we care a great deal about suffering in the case of humans (as, presumably, we would in the case of intelligent aliens). So it seems arbitrary to exclude non-human animal suffering from our concerns completely. Moreover, if you’re uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption. Mistakenly killing thousands of suffering moral patients over your lifetime is plausibly a much bigger worry than mistakenly sparing thousands of unconscious zombies and missing out on some mouth-pleasures.
I’m not a vegetarian myself, but I do think vegetarianism is a morally superior option. I also think vegetarians should adopt a general policy of not paying people to become vegetarians (except perhaps as a short-term experiment, to incentivize trying out the lifestyle).
Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?)
I’m a human and I care about humans. Animals only matter insofar as they affect the lives of humans. Is this really such a difficult concept?
But non-human animal suffering is likely to be orders of magnitude more common.
I don’t mean per organism, I mean in aggregate. In aggregate, I think the totality of animal suffering is orders of magnitude less important than the totality of human suffering.
Moreover, if you’re uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption.
I’m not disagreeing that animals suffer. I’m telling you that I don’t care whether they suffer.
You are many things: a physical object, a living being, a mammal, a member of the species Homo sapiens, an East Asian (I believe), etc. What’s so special about the particular category you picked?
Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?
Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.
it seems you should care about at least some nonhuman animals.
I’m willing to entertain this possibility. I’ve recently been convinced that I should consider caring about dolphins and other similarly intelligent animals, possibly including pigs (so I might be willing to give up pork). I still don’t care about fish or chickens. I don’t think I can have a meaningful relationship with a fish or a chicken even in principle.
Presumably mammals also exhibit more psychological similarity than non-mammals, and the same is probably true about East Asians relative to members of other races. What makes the psychological unity of mankind special?
I suspect that if you plotted all living beings by psychological similarity with Qiaochu_Yuan, there would be a much bigger gap between the -- [reminds himself about small children, people with advanced-stage Alzheimer’s, etc.] never mind.
(I could steelman my yesterday self by noticing that even though small children aren’t similar to QY they can easily become so in the future, and by replacing “gap” with “sparsely populated region”.)
Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.
Doesn’t follow. If we imagine a personhood metric for animals evaluated over some reasonably large number of features, it might end up separating (most) humans from all nonhuman animals even if for each particular feature there exist some nonhuman animals that beat humans on it. There’s no law of ethics saying that the parameter space has to be small.
It’s not likely to be a clean separation, and there are almost certainly some exceptional specimens of H. sapiens that wouldn’t stand up to such a metric, but—although I can’t speak for Qiaochu—that’s a bullet I’m willing to bite.
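A toy sketch of that point: for every individual feature some nonhuman animal beats the human, yet the aggregate score still cleanly separates the human. The features, species, and numbers below are invented purely for illustration and carry no empirical weight.

    # Toy personhood metric: for each hypothetical feature, some nonhuman
    # animal beats the human, yet the unweighted aggregate still separates
    # the human from all of them. All numbers are made up for illustration.
    scores = {
        # feature order: memory, social bonding, tool use, communication
        "human":    [7, 7, 7, 7],
        "elephant": [8, 4, 4, 4],  # beats the human on memory
        "chimp":    [4, 8, 4, 4],  # beats the human on social bonding
        "corvid":   [4, 4, 8, 4],  # beats the human on tool use
        "dolphin":  [4, 4, 4, 8],  # beats the human on communication
    }

    def personhood(feature_scores, weights=None):
        """Aggregate a feature vector into one score (unweighted sum by default)."""
        weights = weights or [1] * len(feature_scores)
        return sum(w * v for w, v in zip(weights, feature_scores))

    for name, values in scores.items():
        print(name, personhood(values))
    # human: 28; every nonhuman: 20, even though each nonhuman wins on one feature.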
Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some animals.
Does not follow, since an equally valid conclusion is that Qiaochu_Yuan should not-care about some humans (those that exhibit relevant traits less than some nonhuman animals). One person’s modus ponens is etc.
I’m a human and I care about humans. Animals only matter insofar as they affect the lives of humans.
Every human I know cares at least somewhat about animal suffering. We don’t like seeing chickens endlessly and horrifically tortured—and when we become vividly acquainted with such torture, our not-liking-it generally manifests as a desire for the torture to stop, not just as a desire to become ignorant that this is going on so it won’t disturb our peace of mind. I’ll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.
I’m not disagreeing that animals suffer. I’m telling you that I don’t care whether they suffer.
Are you certain you don’t care?
Are you certain that you won’t end up viewing this dispassion as a bias on your part, analogous to people in history who genuinely didn’t care at all about black people (but would regret and abandon this apathy if they knew all the facts)?
If you feel there’s any realistic chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don’t care about is a much smaller cost than learning 20 years from now you’re the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
I’ll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.
I don’t feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.
Are you certain you don’t care?
No, or else I wouldn’t be asking for arguments.
If you feel there’s any chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don’t care about is a much smaller cost than learning 20 years from now you’re the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
I don’t feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.
I don’t either, but unless I can come up with a sharp and universal criterion for distinguishing all chickens from all humans, chickens’ psychological alienness to me will seem a difference of degree more than of kind. It’s a lot easier to argue that chicken suffering matters less than human suffering (or to argue that chickens are zombies) than to argue that chicken suffering is completely morally irrelevant.
Some chickens may very well have more psychologically in common with me than I have in common with certain human infants or with certain brain-damaged humans; but I still find myself able to feel that sentient infants and disabled sentient humans oughtn’t be tortured. (And not just because I don’t want their cries to disturb my own peace of mind. Nor just because they could potentially become highly intelligent, through development or medical intervention. Those might enhance the moral standing of any of these organisms, but they don’t appear to exhaust it.)
That’s not a good point, that’s a variety of Pascal’s Mugging: you’re suggesting that the fact that the possible consequence is large (“I tortured beings” is a really negative thing) means that even if the chance is small, you should act on that basis.
I’m telling you that I don’t care whether they suffer.
I don’t believe you. If you see someone torturing a cat, a dolphin or a monkey, would you feel nothing? (Suppose that they are not likely to switch to torturing humans, to avoid “gateway torture” complications.)
My problem with this question is that if I see video of someone torturing a cat when I am confident there was no actual cat-torturing involved in creating those images (e.g., I am confident it was all photoshopped), what I feel is pretty much indistinguishable from what I feel if I see video of someone torturing a cat when I am confident there was actual cat-torturing.
So I’m reluctant to treat what I feel in either case as expressing much of an opinion about suffering, since I feel it roughly equally when I believe suffering is present and when I don’t.
So if you can factor-out, so to speak, the actual animal suffering: If you had to choose between “watch that video, no animal was harmed” versus “watch that video, an animal was harmed, also you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))”, which would you choose? (Your feelings would be the same, as you say, your decision probably wouldn’t be. Just checking.)
A biscuit provides the same number of calories as 100 SQUID, which stands for Superconducting Quantum Interference Device, which weigh a pound apiece, which masses 453.6 grams, which converts to 4 * 10^16 joules, which can be converted into 1.13 * 10^10 kilowatt-hours, which are worth 12 cents per kW-hr, so around 136 billion dollars or so.
Reminds me of … Note the name of the website. She doesn’t look happy! “I am altering the deal. Pray I don’t alter it any further.”
Edit: Also, 1.13 * 10^10 kilowatt-hours at 12 cents each yields 1.36 billion dollars, not 136 billion dollars! An honest mistake (cents, not dollars per kWh), or a scam? And as soon as Dmitry is less active …
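For what it’s worth, a quick back-of-the-envelope check of that correction (full mass-energy conversion of one pound, priced at 12 cents per kWh; the variable names are just for illustration):

    # Rough check of the mass-energy arithmetic above (values approximate).
    mass_kg = 0.4536                       # one pound in kilograms
    c = 2.998e8                            # speed of light, m/s
    energy_joules = mass_kg * c**2         # ~4.08e16 J
    energy_kwh = energy_joules / 3.6e6     # ~1.13e10 kWh
    dollars = energy_kwh * 0.12            # at 12 cents per kWh
    print(energy_joules, energy_kwh, dollars)  # ~4.1e16, ~1.1e10, ~1.36e9 (about 1.36 billion dollars)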
“squid” is slang for a GBP, i.e. Pound Sterling, although I’m more used to hearing the similar “quid.” One hundred of them can be referred to as a “biscuit,” apparently because of casino chips, similar to how people in America will sometimes refer to a hundred dollars as a “benjamin.”
That is, what are TheOtherDave’s preferences between watching an unsettling movie that does not correspond to reality and watching an unsettling movie that does correspond to reality, but they’re paid some cash.
In this case it seems to. It’s the first time I recall encountering it but I’m not British and my parsing of unfamiliar and ‘rough’ accents is such that if I happened to have heard someone say ‘squid’ I may have parsed it as ‘quid’, and discarded the ‘s’ as noise from people saying a familiar term in a weird way rather than a different term.
It amuses me that despite making neither head nor tail of the unpacking, I answered the right question. Well, to the extent that my noncommittal response can be considered an answer to any question at all.
Well, I figured that much out from googling, but I was more reacting to what seems like a deliberate act of obfuscation on Kawoomba’s part that serves no real purpose.
So to be clear—you do some Googling and find two videos, one has realistic CGI animal harm, the other real animal harm; assume the CGI etc is so good that I wouldn’t be able to tell which was which if you hadn’t told me. You don’t pay for the animal harm video, or in any way give anyone an incentive to harm an animal in fetching it; just pick up a pre-existing one. I have a choice between watching the fake-harm video (and knowing it’s fake) or watching the real-harm video and receiving £100.
If the reward is £100, I’ll take the £100; if it’s an actual biscuit, I prefer to watch the fake-harm video.
I’m genuinely unsure, not least because of your perplexing unpacking of “biscuit”.
Both examples are unpleasant; I don’t have a reliable intuition as to which is more so if indeed either is.
I have some vague notion that if I watch the real-harm video that might somehow be interpreted as endorsing real-harm more strongly than if I watch the fake-harm video, like through ratings or download monitoring or something, which inclines me to the fake-harm video. Though whether I’m motivated by the vague belief that such differential endorsement might cause more harm to animals, or by the vague belief that it might cause more harm to my status, I’m again genuinely unsure of. In the real world I usually assume that when I’m not sure it’s the latter, but this is such a contrived scenario that I’m not confident of that either.
If I assume the biscuit is a reward of some sort, then maybe that reward is enough to offset the differential endorsement above, and maybe it isn’t.
I don’t want to see animals get tortured because that would be an unpleasant thing to see, but there are lots of things I think are unpleasant things to see that don’t have moral valence (in another comment I gave the example of seeing corpses get raped).
I might also be willing to assign dolphins and monkeys moral value (I haven’t made up my mind about this), but not most animals.
Do you have another example besides the assault of corpses? I can easily see real moral repugnance from the effect it has on the offenders, who are victims of their own actions. If you find it unpleasant only when you see it, would not they find it horrific when they perform it?
Also in these situations, repugnance can leak due to uncertainty of other real moral outcomes, such as the (however small) likelihood of family members of the deceased learning of the activity, for whom these corpses have real moral value.
Seeing humans perform certain kinds of body modifications would also be deeply unpleasant to me, but it’s also not an act I assign moral valence to (I think people should be allowed to modify their bodies more or less arbitrarily).
I’ll chime in to comment that QiaochuYuan’s[1] views as expressed in this entire thread are quite similar to my own (with the caveat that for his “human” I would substitute something like “sapient, self-aware beings of approximately human-level intelligence and above” and possibly certain other qualifiers having to do with shared values, to account for Yoda/Spock/AIs/whatever; it seems like QiaochuYuan uses “approximately human” to mean roughly this).
So, please reconsider your disbelief.
[1] Sorry, the board software is doing weird things when I put in underscores...
If I did have a pet, it is possible that I would not care for it (assuming animal cruelty laws did not exist), although it is more likely that I would develop an attachment to it, and would come to care about its well-being. That is how humans work, in my experience. I don’t think this necessarily has any implications w.r.t. the moral status of nonhuman animals.
Do you consider young children and very low-intelligence people to be morally-relevant?
(If—in the case of children—you consider potential for later development to be a key factor, we can instead discuss only children who have terminal illnesses.)
Long answer: When I read Peter Singer, what I took away was not what many people here apparently took away, namely that we should value animals; what I took away is that we should not value fetuses, newborns, and infants (to a certain age, somewhere between 0 and 2 years [1]). That is, I think the cutoff for moral relevance is somewhere above, say, cats, dogs, newborns… where exactly? I’m not sure.
Humans who have a general intelligence so low that they are incapable of thinking about themselves as conscious individuals are also, in my view, not morally relevant. I don’t know whether such humans exist (most people with Down syndrome don’t quite seem to fit that criterion, for instance).
There are many caveats and edge cases, for instance: what if the low-intelligence condition is temporary, and will repair itself with time? Then I think we should consider the wishes of the self that the person was before the impairment, and the rights of their future, non-impaired, selves. But what if the impairment can be repaired using medical technology? Same deal. What if it can’t? Then I would consider this person morally irrelevant. What if the person was of extremely low intelligence, and had always been so, but we could apply some medical intervention to raise their intelligence to at least normal human level? I would consider that act morally equivalent to creating a new sapient being (whether that’s good or bad is a separate question).
So: it’s complicated. But to answer practical questions: I don’t consider infanticide the moral equivalent of murder (although it’s reasonable to outlaw it anyway, as birth is a good Schelling point, but the penalty should surely be nowhere near as harsh as for killing an adult or older child). The rights of low-intelligence people are a harder issue, partly because there are no obvious cutoffs or metrics.
I hope that answers your question; if not, I’ll be happy to elaborate further.
Would that allow absorbing some of his midichlorians? Black magic! Well, I might try (since he died of natural causes anyway). But Yoda dies without leaving a corpse. It would be difficult. The only viable strategy would seem to be to have Yoda anesthetize himself a minute before he ghosts (“becomes one with the force”). Then the flesh would remain corporeal for consumption.
The real ethical test would be: would I freeze Yoda’s head in carbonite, acquire brain scanning technology, and upload him into a robot body? Yoda may have religious objections to the practice, so I may honour his preferences while being severely disappointed. I suspect I’d choose the Dark Side of the Force myself. The Sith philosophy seems much more compatible with life extension by whatever means necessary.
It should be noted that Yoda has an observable afterlife. Obi-wan had already appeared after his body had died, apparently in full possession of his memories and his reasoning abilities; Yoda proposes to follow in Obi-wan’s footsteps, and has good reason to believe that he will be able to do so.
Actual use of Sith techniques seems to turn people evil at ridiculously accelerated rates. At least in-universe it seems that sensible people would write off this attractive-sounding philosophy as window dressing on an extremely damaging set of psychic techniques.
I wouldn’t eat flies or squids either. But I know that that’s a cultural construct.
Let’s ask another question: would I care if someone else eats Yoda?
Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable. If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that’s why he ate Yoda, and Yoda’s will granted permission for this), then no, I wouldn’t care if someone else eats Yoda.
Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable.
In practice? In common Yoda-eating practice? Something about down to earth ‘in practice’ empirical observations about things that can not possibly have ever occurred strikes me as broken. Perhaps “would be, presumably, correlated with”.
If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that’s why he ate Yoda, and Yoda’s will granted permission for this), then no, I wouldn’t care if someone else eats Yoda.
In Yoda’s case he could even have just asked for permission from Yoda’s force ghost. Jedi add a whole new level of meaning to “Living Will”.
“In practice” doesn’t mean “this is practiced”, it means “given that this is done, what things are, with high probability, associated with it in real-life situations” (or in this case, real-life-+-Yoda situations). “In practice” can apply to rare or unique events.
I really don’t think statements of the form “X is, in practice, correlated with Y” should apply to situations where X has literally never occurred. You might want to say “I expect that X would, in practice, be correlated with Y” instead.
What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian?
I am a moral anti-realist, so I don’t think there’s any argument I could give you to persuade you to change your values. To me, it feels very inconsistent to not value animals—it sounds to me exactly like someone asking for an argument for why they ought to care about foreigners.
Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction. Though maybe you wouldn’t, or you would think the reaction irrational? I don’t know.
However, if you really do care about humans and humans alone, the environmental argument still has weight, though certainly less.
~
Also, meat is delicious and contains protein.
One can get both protein and deliciousness from non-meat sources.
~
Alternatively, how much would you be willing to pay me to stop eating meat?
I’m not sure. I don’t think there’s a way I could make that transaction work.
Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.
Some interesting things about this example:
Distance seems to have a huge impact when it comes to the bystander effect, and it’s not clear that it’s irrational. If you are the person who is clearly best situated to save a puppy from torture, that seems different from the fact that dogs are routinely farmed for meat in other parts of the world, by armies of people you could not hope to personally defeat or control.
Someone who is willing to be sadistic to animals might be sadistic towards humans as well, and so they may be a poor choice to associate with (and possibly a good choice to anti-associate with).
Many first world countries have some sort of law against bestiality. (In the US, this varies by state.) However, any justification for these laws based on the rights of the animals would also rule out related behavior in agribusiness, which is generally legal. There seems to be a difference between what people are allowed to do for fun and what they’re allowed to do for profit; this makes sense if we view the laws as being directed not against actions, but against kinds of people.
To me, it feels very inconsistent to not value animals—it sounds to me exactly like someone asking for an argument for why they ought to care about foreigners.
Well, and what would you say to someone who thought that?
Also, do you really not value animals?
I don’t know. It doesn’t feel like I do. You could try to convince me that I do even if you’re a moral anti-realist. It’s plausible I just haven’t spent enough time around animals.
I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.
Probably. I mean, all else being equal I would prefer that an animal not be tortured, but in the case of farming and so forth all else is not equal. Also, like Vaniver said, any negative reaction I have directed at the person is based on inferences I would make about that person’s character, not based on any moral weight I directly assign to what they did. I would also have some sort of negative reaction to someone raping a corpse, but it’s not because I value corpses.
One can get both protein and deliciousness from non-meat sources.
My favorite non-meat dish is substantially less delicious than my favorite meat dish. I do currently get a decent amount of protein from non-meat sources, but asking someone who gets their protein primarily from meat to give up meat means asking them to incur a cost in finding and purchasing other sources of protein, and that cost needs to be justified somehow.
I’m not sure. I don’t think there’s a way I could make that transaction work.
Really? This can’t be that hard a problem to solve. We could use a service like Fiverr, with you paying me $5 not to eat meat for some period of time.
Right now, I don’t know. I feel like it would be playing a losing game. What would you say?
I would probably say something like “you just haven’t spent enough time around them. They’re less different from you than you think. Get to know them, and you might come to see them as not much different from the people you’re more familiar with.” In other words, I would bet on the psychological unity of mankind. Some of this argument applies to my relationship with the smarter animals (e.g. maybe pigs) but not to the dumber ones (e.g. fish). Although I’m not sure how I would go about getting to know a pig.
I’m not sure how I would do that. Would you kick a puppy? If not, why not?
No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn’t chop down a tree either, but it’s not because I think trees have moral value, and I don’t plan to take any action against the logging industry as a result.
How could I verify that you actually refrain from eating meat?
Oh, that’s what you were concerned about. It would be beneath my dignity to lie for $5, but if that isn’t convincing, then I dunno. (On further thought, this seems like a big problem for measuring the actual impact of any proposed vegetarian proselytizing. How can you verify that anyone actually refrains from eating meat?)
“No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn’t chop down a tree either, but it’s not because I think trees have moral value, and I don’t plan to take any action against the logging industry as a result.”
All else is never precisely equal. If I offered you £100 to do one of these of your choice, would you rather
a) give up meat for a month
b) beat a puppy to death
I suspect that the vast majority of people who eat battery chicken to save a few dollars would require much more money to directly cause the same sort of suffering to a chicken. Whereas when it came to chopping down trees it would be more a matter of whether the cash was worth the effort. Of course, it could very easily be that the problem here is not with Person A (detached, callous eater of battery chicken) but with Person B (overempathic, anthropomorphizing person who doesn’t like to see chickens suffering), but the contrast is quite telling.
For what it’s worth, I also wouldn’t treat painlessly and humanely slaughtering a chicken who has lived a happy and fulfilled life with my own hands equivalently to paying someone else to do so where I don’t have to watch. There’s quite a contrast there, as well, but it seems to have little to do with suffering.
That said, I would almost undoubtedly prefer watching a chicken be slaughtered painlessly and humanely to watching it suffer while being slaughtered. Probably also to watching it suffer while not being slaughtered.
Mostly, I conclude that my preferences about what I want to do, what I want to watch, and what I want to have done on my behalf, are not well calibrated to one another.
Yeah, that’s the only clear conclusion. The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled. Honestly not sure where I stand on this.
I don’t think that “not enjoying killing a chicken” should be described as an “intuition”. Moral intuitions generally take the form of “it seems to me that / I strongly feel that so-and-so is the right thing to do / the wrong thing to do / bad / good / etc.” What you do or do not enjoy doing is a preference, like enjoying chocolate ice cream, not enjoying ice skating, being attracted to blondes, etc. Preferences can’t be “true” or “false”, they’re just facts about your mental makeup. (It may make sense to describe a preference as “invalid” in certain senses, however, but not obviously any senses relevant to this current discussion.)
So for instance “I think killing a chicken is morally ok” (a moral intuition) and “I don’t like killing chickens” (a preference) do not conflict with each other any more than “I think homosexuality is ok” and “I am heterosexual” conflict with each other, or “Being a plumber is ok (and in fact plumbers are necessary members of society)” and “I don’t like looking inside my plumbing”.
Now, if you wanted to take this discussion to a slightly more subtle level, you might say: “This is different! Killing chickens causes in me a kind of psychic distress usually associated with witnessing or performing acts that I also consider to be immoral! Surely this is evidence that this, too, is immoral?” To that I can respond only that yes, this may be evidence in the strict Bayesian sense, but the signals your brain generates may be flawed. We should evaluate the ethical status of the act in question explicitly; yes, we should take moral intuitions into account, but my intuition, at least, is that chicken-killing is fine, despite having no desire to do it myself. This screens off the “agh I don’t want to do/watch this!” signal.
The dividing lines between the kinds of cognitive states I’m inclined to call “moral intuitions” and the kinds of cognitive states I’m inclined to call “preferences” and the kinds of cognitive states I’m inclined to call “psychic distress” are not nearly as sharp, in my experience, as you seem to imply here. There’s a lot of overlap, and in particular the states I enter surrounding activities like killing animals (especially cute animals with big eyes) don’t fall crisply into just one category.
But, sure, if we restrict the discussion to activities where those categories are crisply separated, those distinctions are very useful.
The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled.
Mm. If you mean to suggest that the outcome of moral reasoning is necessarily that one of my intuitions gets endorsed, then I disagree; I would expect worthwhile moral reasoning to sometimes endorse claims that my intuition didn’t provide in the first place, as well as claims that my intuitions consistently reject.
In particular, when my moral intuitions conflict (or, as SaidAchmiz suggests, when the various states that I have a hard time cleanly distinguishing from my moral intuitions despite not actually being any such thing conflict), I usually try to envision patterning the world in different ways that map in some fashion to some weighting of those states, ask myself what the expected end result of that patterning is, see whether I have clear preferences among those expected endpoints, work backwards from my preferred endpoint to the associated state-weighting, and endorse that state-weighting.
The result of that process is sometimes distressingly counter-moral-intuitive.
I am a moral anti-realist, so I don’t think there’s any argument I could give you to persuade you to change your values.
The relevant sense of changing values is change of someone else’s purposeful behavior. The philosophical classification of your views doesn’t seem like useful evidence about that possibility.
I don’t understand what that means for my situation, though. How am I supposed to argue him out of his current values?
I mean, it’s certainly possible to change someone’s values through anti-realist argumentation. My values were changed in that way several times. But I don’t know how to do it.
How am I supposed to argue him out of his current values?
This is a separate question. I was objecting to the relevance of invoking anti-realism in connection with this question, not to the bottom line where that argument pointed.
If moral realism were true, there would be a very obvious path to arguing someone out of their values—argue for the correct values. In my experience, when people want an argument to change their values, they want an argument for what the correct value is, assuming moral realism.
I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.
That doesn’t necessarily mean that I have animals being tortured as a negative terminal value: I might only dislike that because it generates negative warm fuzzies.
To me, it feels very inconsistent to not value animals—it sounds to me exactly like someone who wants an argument for why they ought to care about foreigners.
Unfortunately, the typical argument in favour of caring about foreigners, people of other races, etc., is that they are human too.
If distinct races were instead distinct human subspecies or closely-related species, would the moral case for treating these groups equally ipso facto collapse?
If not, then ‘they’re human too’ must be a stand-in for some other feature that’s really doing the pushing and pulling of our moral intuitions. At the very least, we need to taboo ‘human’ to figure out what the actual relevant concept is, since it’s not the standard contemporary biological definition.
In my case, I think that the relevant concept is human-level (or higher) intelligence. Of all the known species on Earth, humanity is the only one that I know to possess human-level or higher intelligence.
One potentially suitable test for human-level intelligence is the Turing test; due to their voice-mimic abilities, a parrot or a mynah bird may sound human at first, but it will not in general pass a Turing test.
Biological engineering on an almost-sufficiently-intelligent species (such as a dolphin) may lead to another suitably intelligent species with very little relation to a human.
That different races have effectively the same intellectual capacities is surely an important part of why we treat them as moral equals. But this doesn’t seem to me to be entirely necessary — young children and the mentally handicapped may deserve most (though not all) moral rights, while having a substantially lower level of intelligence. Intelligence might also turn out not to be sufficient; if a lot of why we care about other humans is that they can experience suffering and pleasure, and if intelligent behavior is possible without affective and evaluative states like those, then we might be able to build an AI that rivaled our intelligence but did not qualify as a moral patient, or did not qualify as one to the same extent as less-intelligent-but-more-suffering-prone entities.
Clearly, below-human-average intelligence is still worth something … so is there a cutoff point or what?
(I think you’re onto something with “intelligence”, but since intelligence varies, shouldn’t how much we care vary too? Shouldn’t there be some sort of sliding scale?)
Thinking through my mental landscape, I find that in most cases I value children (slightly) above adults. I think that this is more a matter of potential than anything else. I also put some value on an unborn human child, which could reasonably be said to have no intelligence at all (especially early on).
So, given that, I think that I put some fairly significant value on potential future intelligence as well as on present intelligence.
But, as you point out, below-human intelligence is still worth something.
...
I don’t think there’s really a firm cutoff point, such that one side is “worthless” and the other side is “worthy”. It’s a bit like a painting.
At one time, there’s a blank canvas, a paintbrush, and a pile of tubes of paint. At this point, it is not a painting. At a later time, there’s a painting. But there isn’t one particular moment, one particular stroke of the brush, when it goes from “not-a-painting” to “painting”. Similarly for intelligence; there isn’t any particular moment when it switches automatically from “worthless” to “worthy”.
If I’m going to eat meat, I have to find the point at which I’m willing to eat it by some other means than administering I.Q. tests (especially as, when I’m in the supermarket deciding whether or not to purchase a steak, it’s a bit late to administer any tests to the cow). Therefore, I have to use some sort of proxy measurement correlated with intelligence instead. For the moment, i.e. until some other species is proven to have human-level or near-human intelligence, I’m going to continue to use ‘species’ as my proxy measurement.
So what do you think of ‘sapient’ as a taboo for ‘human’? Necessary conditions on sapience will, I suppose, be things like language use and sensation. As for those mentally handicapped enough to fall below sapience, I’m willing to bite the bullet on that so long as we’re willing to discuss indirect reasons for according something moral respect. Something along the lines of Kant’s claim that cruelty to animals is wrong not because of the rights of the animal (who has none) but because wantonly harming a living thing damages the moral faculties of the agent.
How confident are you that beings capable of immense suffering, but who haven’t learned any language, all have absolutely no moral significance? That we could (as long as it didn’t damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?
I don’t see any particular reason for this to be the case, and again the risks of assuming it and being wrong seem much greater than the risks of assuming its negation and being wrong.
That we could (as long as it didn’t damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?
I’m not committed to this, or anything close. What I’m committed to is the ground of moral respect being sapience, and whatever story we tell about the moral respect accorded to non-sapient (but, say, sentient) beings is going to relate back to the basic moral respect we have for sapience. This is entirely compatible with regarding sentient non-language-users as worthy of protection, etc. In other words, I didn’t intend my suggestion about a taboo replacement to settle the moral-vegetarian question. It would be illicit to expect a rephrasing of the problem to do that.
So to answer your question:
How confident are you that beings capable of immense suffering, but who haven’t learned any language, all have absolutely no moral significance?
I donno, I didn’t claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
“Sapience” is not a crisp category. Humans are more sapient than chimpanzees, crows, and dogs. Chimpanzees, crows, and dogs are more sapient than house cats and fish. Some humans are more or less sapient than other humans.
Suppose one day we encounter a non-human intelligent species that is to us as we are to chimpanzees. Would such a species be justified in considering us as non-sapient and unworthy of moral respect?
I don’t think sapience and/or sentience is necessarily a bad place to start. However I am very skeptical of attempts to draw hard lines that place all humans in one set, and everything else on Earth in another.
Well, I was suggesting a way of making it pretty crisp: it requires language use. None of those other animals can really do that. But to the extent that they might be trained to do so, I’m happy to call those animals sapient. What’s clear is that, for example, dogs, cows, or chickens are not at all sapient by this standard.
Would such a species be justified in considering us as non-sapient and unworthy of moral respect?
No, but I think the situation you describe is impossible. That intelligent species (assuming they understood us well enough to make this judgement) would recognize that we’re language-users. Chimps aren’t.
Sorry, still not crisp. If you’re using sapience as a synonym for language, language is not a crisp category either. Crows and elephants have demonstrated abilities to communicate with other members of their own species. Chimpanzees can be taught enough language to communicate bidirectionally with humans. Exactly what this means for animal cognition and intelligence is a matter of much dispute among scientists, as is whether animals can really be said to use language or not; but the fact that it is disputed should make it apparent that the answer is not obvious or self-evident. It’s a matter of degree.
Ultimately this just seems like a veiled way to specially privilege humans, though not all of them. Is a stroke victim with receptive aphasia nonsapient? You might equally well pick the use of tools to make other tools, or some other characteristic to draw the line where you’ve predetermined it will be drawn; but it would be more honest to simply state that you privilege Homo sapiens sapiens, and leave it at that.
If you’re using sapience as a synonym for language, language is not a crisp category either.
Not a synonym. Language use is a necessary condition. And by ‘language use’ I don’t mean ‘ability to communicate’. I mean more strictly something able to work with things like syntax and semantics and concepts and stuff. We’ve trained animals to do some pretty amazing things, but I don’t think any, or at least not more than a couple, are really language users. I’m happy to recognize the moral worth of any there are, and I’m happy to recognize a gradient of worth on the basis of a gradient of sapience. I don’t think anything we’ve encountered comes close to human beings on such a gradient, but that might just be my ignorance talking.
Ultimately this just seems like a veiled way to specially privilege humans,
It’s not veiled! I think humans are privileged, special, better, more significant, etc. And I’m not picking an arbitrary part of what it means to be human. I think this is the very part that, were we to find it in a computer or an alien or an animal, would immediately lead us to conclude that this being had moral worth.
Are you seriously suggesting that the difference between someone you can understand and someone you can’t matters just as much as the difference between me and a rock? Do you think your own moral worth would vanish if you were unable to communicate with me?
Yes, I’m suggesting both, on a certain reading of ‘can’ and ‘unable’. If I were, in principle, incapable of communicating with anyone (in the way worms are) then my moral worth, or anyway the moral worth accorded to sapient beings on the basis of their being sapient on my view, would disappear. I might have moral worth for other reasons, though I suspect these will come back to my holding some important relationship to sapient beings (like formerly being one).
If you are asking whether my moral worth would disappear if I, a language user, were by some twist of fate made unable to communicate, then my moral worth would not disappear (since I am still a language user).
The goal of defining ‘human’ (and/or ‘sapient’) here is to steel-man (or at least better understand) the claim that only human suffering matters, so we can evaluate it. If “language use and sensation” end up only being necessary or sufficient for concepts of ‘human’ that aren’t plausible candidates for the original ‘non-humans aren’t moral patients’ claim, then they aren’t relevant. The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything. But we can still adopt an outsider’s perspective and claim that universes with sentience but no sapience are better when they have a higher ratio of joy to suffering, or of preference satisfaction to preference frustration.
The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture. It does probably entail that our reasons for protecting sentient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons. This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters, it began as an attempt to steel-man the claim that the reason human suffering matters to us (when we have no other reason to care) is that it is specifically human suffering.
Another way to put this is that I’m defending, or trying to steel-man, the claim that the fact that a human’s suffering is human gives us a reason all on its own to think that that suffering is ethically significant. While nothing about an animal’s suffering being animal suffering gives us a reason all on its own to think that that suffering is ethically significant. We could still have other reasons to think it so, so the ‘infinite torture’ objection doesn’t necessarily land.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture.
You seem to be using ‘anthropocentric’ to mean ‘humans are the ultimate arbiters or sources of morality’. I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’. Then by definition it doesn’t matter whether non-humans are tortured, except insofar as this also diminishes humans’ welfare. This is the definition that seems relevant to Qiaochu’s statement, “I am still not convinced that I should care about animal suffering.” The question isn’t why we should care; it’s whether we should care at all.
It does probably entail that our reasons for protecting sentient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons.
I don’t think which reasons happen to psychologically motivate us matters here. People can have bad reasons to do good things. More interesting is the question of whether our good reasons would all be human-related, but that too is independent of Qiaochu’s question.
This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters, it began as an attempt to steel-man the claim that the reason human suffering matters to us is that it is specifically human suffering.
No, the latter was an afterthought. The discussion begins here.
I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’.
Ah, okay, to be clear, I’m not defending this view. I think it’s a strawman.
I don’t think which reasons happen to psychologically motivate us matters here.
I didn’t refer to psychological reasons. An example besides Kant’s (which is not psychological in the relevant sense) might be this: it is unethical to torture a cow because though cows have no ethical significance in and of themselves, they do have ethical significance as domesticated animals, who are wards of our society. But that’s just an example of such a reason.
No, the latter was an afterthought. The discussion begins here.
I took the discussion to begin from Peter’s response to that comment, since that comment didn’t contain an argument, while Peter’s did. It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
But this is getting to be a discussion about our discussion. I’m not tapping out, quite, but I would like us to move on to the actual conversation.
It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
Not if you agreed with Qiaochu that no adequately strong reasons for caring about any non-human suffering have yet been presented. There’s no rule against agreeing with an OP.
Fair point, though we might be reading Qiaochu differently. I took him to be saying “I know of no reasons to take animal suffering as morally significant, though this is consistent with my treating it as if it is and with its actually being so.” I suppose you took him to be saying something more like “I don’t think there are any reasons to take animal suffering as morally significant.”
I don’t have good reasons to think my reading is better. I wouldn’t want to try and defend Qiaochu’s view if the second reading represents it.
I donno, I didn’t claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
If that was the case there would be no one to do the discussing.
What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian? Alternatively, how much would you be willing to pay me to stop eating meat?
I found it interesting to compare “this is the price at which we could buy animals not existing” to “this is the price people are willing to pay for animals to exist so they can eat them,” because it looks like the second is larger, often by orders of magnitude. (This shouldn’t be that surprising for persuasion; if you can get other people to spend their own resources, your costs are much lower.)
It also bothers me that so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be ‘factory farmed’ in the same way. [Edit: It appears that conditions for fish on fish farms are actually pretty bad, to the point that many species of fish cannot survive modern farming techniques. So, no comment on the relative badness.]
It also bothers me that so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be ‘factory farmed’ in the same way. (It seems to me that fish farms are much more like their natural habitat than chicken farms are like their natural habitat, but that may be mistaken.)
From what I know, fish farming doesn’t sound pleasant, though perhaps it’s not nearly as bad as chicken farming.
If that description makes you think that fish farming might possibly be in the same ballpark as chicken farming, then you’re pretty ignorant of factory farming. Maybe you haven’t seen enough propaganda?
Your other link is about killing the fish. Focus on the death rather than life may be good for propaganda, but do you really believe that much of the suffering is there? Indeed, your post claimed to be about days of life.
Added: it makes me wonder if activists are corrupted by dealing with propaganda to focus on the aspects for which propaganda is most effective. Or maybe it’s just that the propaganda works on them.
If that description makes you think that fish farming might possibly be in the same ballpark as chicken farming, then you’re pretty ignorant of factory farming.
I never said they were in the same ballpark. Just that fish farming is also something I don’t like.
~
Your other link is about killing the fish. Focus on the death rather than life may be good for propaganda, but do you really believe that much of the suffering is there?
Yes, I do.
~
Indeed, your post claimed to be about days of life.
I agree that might not make much sense for fish, except in so far as farming causes more fish to be birthed than otherwise would.
~
Added: it makes me wonder if activists are corrupted by dealing with propaganda to focus on the aspects for which propaganda is most effective. Or maybe it’s just that the propaganda works on them.
I think this is a bias that is present in any kind of person that cares about advocating for or against a cause.
It also bothers me that so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be ‘factory farmed’ in the same way. (It seems to me that fish farms are much more like their natural habitat than chicken farms are like their natural habitat, but that may be mistaken.)
Well, they can move more, but on the other hand they tend to pollute each others’ environment in a way that terrestrial farmed animals do not, meaning that not all commercially fished species can survive being farmed with modern techniques, and those which can are not necessarily safe for humans to eat in the same quantities.
YMMV, but the argument that did it for me was Mylan Engel Jr.’s argument, as summarized and nicely presented here.
On the assumption that the figures given by the OP are approximately right, with my adjustments for personal values, it would be cost-effective for me to pay you $18 (via BTC) to go from habitual omnivory to 98% ovo-lacto-vegetarianism for a year, or $24 (via BTC) to go from habitual omnivory to 98% veganism for a year, both prorated by month, of course with some modicum of evidence that the change was real. Let me know if you want to take up the offer.
Looking over that argument, in the second link, I notice that those same premises would appear to support the conclusion that the most morally correct action possible would be to find some way to sterilise every vertebrate (possibly through some sort of genetically engineered virus). If there is no next generation—of anything, from horses to cows to tigers to humans to chickens—then there will be no pain and suffering experienced by that next generation. The same premises would also appear to support the conclusion that, having sterilised every vertebrate on the planet, the next thing to do is to find some painless way of killing every vertebrate on the planet, lest they suffer a moment of unnecessary pain or suffering.
I find both of these potential conclusions repugnant; I recognise this as a mental safety net, warning me that I will likely regret actions taken in support of these conclusions in the long term.
This is an argument for vegetarianism, not for caring about animal suffering: many parts of this argument have nothing to do with animal suffering but are arguments that humans would be better off if we ate less meat, which I’m also willing to entertain (since I do care about human suffering), but I was really asking about animal suffering.
I’m not offering a higher price since it seems cost ineffective compared to other opportunities, but I’m curious what your price would be for a year of 98% veganism. (The 98% means that 2 non-vegan meals per month are tolerated.)
I started reading the argument (in your second link), racked up a full hand of premises I disagreed with or found to be incoherent or terribly ill-defined before getting to so much as #10, and stopped reading.
Then I decided that no, I really should examine any argument that convinced an intelligent opponent, and read through the whole thing (though I only skimmed the objections, as they are laughably weak compared to the real ones).
Turns out my first reaction was right: this is a silly argument. Engel lists a number of premises, most of which I disagree with, launches into a tangent about environmental impact, and then considers objections that read like the halfhearted flailings of someone who’s already accepted his ironclad reasoning. As for this:
OBJ6: What if I just give up one of these beliefs [(p1) – (p16)]?
Engel says, “After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false. Now, presumably, you already think your belief system is for the most part reasonable, or you would have already made significant changes in it. So, you will want to reject as few beliefs as possible. Since (p1) – (p16) are rife with implications, rejecting several of these propositions would force you to reject countless other beliefs on pain of incoherence, whereas accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” (883).
It makes me want to post the “WAT” duck in response. Like, is he serious? Or is this actually a case of carefully executed trolling? I begin to suspect the latter...
Edit: Oh, and as Qiaochu_Yuan says, the argument assumes that we care about animal suffering, and so does not satisfy the request in the grandparent.
Based on your description here of your reaction, I get the impression that you mistook the structure of the argument. Specifically, you note, as if it were sufficient, that you disagree with several of the premises. Engel was not attempting to build on the conjunction (p1*p2*...*p16) of the premises; he was building on their disjunction (p1+p2+...+p16). Your credence in p1 through p16 would have to be uniformly very low to keep their disjunction also low. Personally, I give high credence to p1, p9, p10, and varying lower degrees of assent to the other premises, so the disjunction is also quite high for me, and therefore the conclusion has a great deal of strength; but even if I later rejected p1, p9, and p10, the disjunction of the others would still be high. It’s that robustness of the argument, drawing more on many weak points than one strong one, that convinced me.
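(To put rough, made-up numbers on that, numbers which are mine rather than Engel’s: if each of sixteen independent premises were assigned only a 0.2 probability, the probability that at least one of them holds would still be 1 - 0.8^16 ≈ 0.97; only uniformly tiny credences in every premise keep the disjunction low.)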
I don’t understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn’t simply reject whichever premises get in the way of the conclusions you value. p1-p16 are indeed entangled with many other beliefs, and propagating belief and value updates of rejecting more of them is likely, in most people, to be a more severe change than becoming vegetarian. Really, if you find yourself suspecting that a professional philosopher is trolling people in one of his most famous arguments, that’s a prime example of a moment to notice the fact that you’re confused. It’s possible you were reading him as saying something he wasn’t saying.
Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn’t make that assumption. If you want something specifically about animal suffering, presumably a Kantian argument is the way to go: You examine why you care about yourself and you find it is because you have certain properties; so if something else has the same properties, to be consistent you should care about it also. (Obviously this depends on what properties you pick.)
Based on your description here of your reaction, I get the impression that you mistook the structure of the argument.
That’s possible, but I don’t think that’s the case. But let me address the argument in a bit more detail and perhaps we’ll see if I am indeed misunderstanding something.
First of all, this notion that the disjunction of the premises leads to accepting the conclusion is silly. No one of the premises leads to accepting the conclusion. You have to conjoin at least some of them to get anywhere. It’s not like they’re independent, leading by entirely separate lines of reasoning to the same outcome; some clearly depend on others to be relevant to the argument.
And I’m not sure what sort of logic you’re using wherein you believe p1 with low probability, p2 with low probability, p3 … etc., and their disjunction ends up being true. (Really, that wasn’t sarcasm. What kind of logic are you applying here...?) Also, some of them are actually nonsensical or incoherent, not just “probably wrong” or anything so prosaic.
The quoted paragraph:
“After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false.
You’re right, I guess I have no idea what he’s saying here, because this seems to me blatantly absurd on its face. If you’re interested in truth, of course you’re going to reject those beliefs most likely to be false. That’s exactly what you’re going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.
Now, presumably, you already think your belief system is for the most part reasonable, or you would have already made significant changes in it. So, you will want to reject as few beliefs as possible.
??? You will want to reject those and only those beliefs that are false. If you think your belief system is reasonable, then you don’t think any of your beliefs are false, or else you’d reject them. If you find that some of your beliefs are false, you will want to reject them, because if you’re interested in truth then you want to hold zero false beliefs.
Since (p1) – (p16) are rife with implications, rejecting several of these propositions would force you to reject countless other beliefs on pain of incoherence, whereas accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” (883).
I think that accepting many of (p1) – (p16) causes incoherence, actually. In any case, Engel seems to be describing a truly bizarre approach to epistemology where you care less about holding true beliefs than about not modifying your existing belief system too much, which seems like a perfect example of caring more about consistency than truth, despite him describing his view in the exact opposite manner, and… I just… I don’t know what to say.
And when I read your commentary on the above, I get the same ”… what the heck? Is he… is he serious?” reaction.
I don’t understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn’t simply reject whichever premises get in the way of the conclusions you value.
What does this mean? Should I take this as a warning against motivated cognition / confirmation bias? But what on earth does that have to do with my objections? We reject premises that are false. We accept premises that are true. We accept conclusions that we think are true, which are presumably those that are supported by premises we think are true.
p1-p16 are indeed entangled with many other beliefs, and propagating belief and value updates of rejecting more of them is likely, in most people, to be a more severe change than becoming vegetarian.
… and? Again, we should hold beliefs we think are true and reject those we think are false. How on earth is picking which beliefs to accept and which to reject on the basis of what will require less updating… anything but absurd? Isn’t that one of the Great Epistemological Sins that Less Wrong warns us about?
As for the duck comment… professional philosophers troll people all the time. Having never encountered Engel’s writing before now, I of course did not know that this was his most famous argument, nor any basis for being sure of serious intent in that paragraph.
Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn’t make that assumption.
Engel apparently claims that his reader already holds these beliefs, among others:
(p11) It is morally wrong to cause an animal unnecessary pain or suffering.
(p12) It is morally wrong and despicable to treat animals inhumanely for no good reason.
(p13) We ought to euthanize untreatably injured, suffering animals to put them out of their misery whenever feasible.
(Hi, sorry for the delayed response. I’ve been gone.)
And I’m not sure what sort of logic you’re using wherein you believe p1 with low probability, p2 with low probability, p3 … etc., and their disjunction ends up being true. (Really, that wasn’t sarcasm. What kind of logic are you applying here...?)
Just the standard stuff you’d get in high school or undergrad college. Suppose we have independent statements S1 through Sn, and you assign each a subjective probability of P(Si). Then you have the probability of the disjunction P(S1+S2+S3+...+Sn) = 1-P(~S1)*P(~S2)*P(~S3)*...*P(~Sn). So if in a specific case you have n=10 and P(Si)=0.10 for all i, then even though you’re moderately disposed to reject every statement, you’re weakly disposed to accept the disjunction, since P(disjunction)=0.65. This is closely related to the preface paradox.
You’re right, of course, that Engel’s premises are not all independent. The general effect on probability of disjunctions remains always in the same direction, though, since P(A+B)≥P(A) for all A and B.
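For concreteness, here is a minimal C sketch of that independent-case calculation (the function name and the uniform 0.10 inputs are illustrative choices of mine, taken from the example above, not anything from Engel):

#include <stdio.h>

/* Probability that at least one of n independent statements is true,
   given each statement's individual probability p[i]. */
double disjunction_probability(const double p[], int n)
{
    double all_false = 1.0;
    for (int i = 0; i < n; i++)
        all_false *= 1.0 - p[i];
    return 1.0 - all_false;
}

int main(void)
{
    double p[10];
    for (int i = 0; i < 10; i++)
        p[i] = 0.10;                                   /* each statement at 10% */
    printf("%.2f\n", disjunction_probability(p, 10));  /* prints 0.65 */
    return 0;
}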
“After all, as a philosopher [assuming we all love wisdom and want to know the best way in which to live], you are interested in more than mere consistency; you are interested in truth. Consequently, you will not reject just any belief(s) you think most likely to be false.
You’re right, I guess I have no idea what he’s saying here, because this seems to me blatantly absurd on its face. If you’re interested in truth, of course you’re going to reject those beliefs most likely to be false. That’s exactly what you’re going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.
OK, yes, you’ve expressed yourself well and it’s clear that you’re interpreting him as having claimed the opposite of what he meant. Let me try to restate his paragraph in more LW-ish phrasing:
“As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality. Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)”
If you’re interested in reconsidering Engel’s argument given his intended interpretation of it, I’d like to hear your updated reasons for/against it.
Just the standard stuff you’d get in high school or undergrad college. [...]
Ok. I am, actually, quite familiar with how to calculate probabilities of disjunctions; I did not express my objection/question well, sorry. What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe “We ought to take steps to make the world a better place” with P = 0.3? Like, maybe we should and maybe we shouldn’t? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
In short, for a lot of these propositions, it seems nonsensical to talk about levels of credence, and so what makes sense for reasoning about them is just propositional logic. In which case, you have to assert that if ANY of these things are true, then the entire disjunction is true (and from that, we conclude… something. What, exactly? It’s not clear).
And yet, I can’t help but notice that Engel takes an approach that’s not exactly either of the above. He says:
“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat”
I don’t know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don’t rightly know what it would mean for an argument to work like that.
(In other words, my response to the Engel quote above is: “Uh, really? Why...?”)
As for your restatement of Engel’s argument… First of all, I’ve reread that quote from Engel at the end of the PDF, and it just does not seem to me like he is saying what you claim he’s saying. It seems to me that he is suggesting (in the last sentence of the quote) we reason backwards from which beliefs would force less belief revision to which beliefs we should accept as true.
But, ok. Taking your formulation for granted, it still seems to be… rather off. To wit:
“As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality.
Well, here’s the thing. It is certainly true that holding nothing but true beliefs will necessarily imply that your beliefs are consistent with each other. (Although it is possible for there to be apparent inconsistencies, which would be resolved by the acquisition of additional true beliefs.) However, it’s possible to find yourself in a situation where you gain a new belief, find it to be inconsistent with one or more old beliefs, and yet find that, inconsistency aside, both the new and the old beliefs each are sufficiently well-supported by the available evidence to treat them as being true.
At this point, you’re aware that something is wrong with your epistemic state, but you have no real way to determine what that is. The rational thing to do here is of course to go looking for more information, more evidence, and see which of your beliefs are confirmed and which are disconfirmed. Until then, rearranging your entire belief system is premature at best.
“Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)”
Why do you characterize the quoted belief as “motivated”? We are assuming, I thought, that I’ve arrived at said belief by the same process as I arrive at any other beliefs. If that one’s motivated, well, it’s presumably no more motivated than any of my other beliefs.
And, in any case, why are we singling out this particular belief for consistency-checking? Engel’s claim that “accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” seems the height of silliness. Frankly, I’m not sure what could make someone say that but a case of writing one’s bottom line first.
Again I say: the correct thing to do is to hold (that is, to treat as true) those beliefs which you think are more likely true than false, and not any beliefs which you think are more likely false than true. Breaking that rule of thumb for consistency’s sake is exactly the epistemic sin which we are supposedly trying to avoid.
But you know what — all of this is a lot of elaborate round-the-bush-dancing. I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that. That is to say, rather than analyzing whether the structure of Engel’s argument works in theory, let’s put it to the test on his actual claims, yes?
What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe “We ought to take steps to make the world a better place” with P = 0.3? Like, maybe we should and maybe we shouldn’t? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
I’d be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
I don’t know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don’t rightly know what it would mean for an argument to work like that.
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Why do you characterize the quoted belief as “motivated”?
Meat tastes good and is a great source of calories and nutrients. That’s powerful motivation for bodies like us. But you can strike that word if you prefer.
And, in any case, why are we singling out this particular belief for consistency-checking?
We aren’t. We’re requiring only and exactly that it not be singled out for immunity to consistency-checking.
I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
I’d be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
That is well and good, except that “making the world a better place” seems to be an overarching moral goal. At some point, we hit terminal values or axioms of some sort. “Whether a proposition would follow from a moral theory” is conceivably something you could bet on, but what do you do when the proposition in question is part of the relevant moral theory?
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Certainly not. Engel does not offer any deductive system for getting from the premises to the conclusion. In the derivation of an argument (as alluded to by the linked SEP article), premises and intermediate conclusions have to be ordered (at least partially ordered). Engel seems to be treating his premises as undifferentiated lumps, which you can take in any order, without applying any kind of deduction to them; you just take each ounce of premise and pour it into the big bucket-’o-premise, and see how much premise you end up with; if it’s a lot of premise, the conclusion magically appears. The claim that it doesn’t even matter which premises you hold to be true, only the quantity of them, seems to explicitly reject logical deduction.
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
Alright then. To the object level!
Engel claims that you hold the following beliefs:
Let’s see...
(p1) Other things being equal, a world with less pain and suffering is better than a world with more pain and suffering.
Depends on how “pain” and “suffering” are defined. If you define “suffering” to include only mental states of sapient beings, of sufficient (i.e. at least roughly human-level) intelligence to be self-aware, and “pain” likewise, then sure. If you include pain experienced by sub-human animals, and include their mental states in “suffering”, then first of all, I disagree with your use of the word “suffering” to refer to such phenomena, and second of all, I do not hold (p1) under such a formulation.
(p2) A world with less unnecessary suffering is better than a world with more unnecessary suffering.
See (p1).
(p3) Unnecessary cruelty is wrong and prima facie should not be supported or encouraged.
If by “cruelty” you mean … etc. etc., basically the same response as (p1). Humans? Agreed. Animals? Nope.
(p4) We ought to take steps to make the world a better place.
Depends on the steps. If by this you mean “any steps”, then no. If by this you mean “this is a worthy goal, and we should find appropriate steps to achieve and take said steps”, then sure. We’ll count this one as a “yes”. (Of course we might differ on what constitutes a “better” world, but let’s assume away such disputes for now.)
(p4’) We ought to do what we reasonably can to avoid making the world a worse place.
Agreed.
(p5) A morally good person will take steps to make this world a better place and even stronger steps to avoid making the world a worse place.
First of all, this is awfully specific and reads like a way to sneak in connotations. I tend to reject such formulations on general principles. In any case, I don’t think that “morally good person” is a terribly useful concept except as shorthand. We’ll count this one as a “no”.
(p6) Even a minimally decent person would take steps to reduce the amount of unnecessary pain and suffering in the world, if s/he could do so with very little effort.
Pursuant to the caveats outlined in my responses to all of the above propositions… sure. Said caveats partially neuter the statement for Engel’s purposes, but for generosity’s sake let’s call this a “yes”.
(p7) I am a morally good person.
See response to (p5); this is not very meaningful. So, no.
(p8) I am at least a minimally decent person.
Yep.
(p9) I am the sort of person who certainly would take steps to help reduce the amount of pain and suffering in the world, if I could do so with very little effort.
I try not to think of myself in terms of “what sort of person” I am. As for whether reducing the amount of pain and suffering is a good thing and whether I should do it — see (p4) and (p4′). But let’s call this a “yes”.
(p10) Many nonhuman animals (certainly all vertebrates) are capable of feeling pain.
This seems relatively uncontroversial.
(p11) It is morally wrong to cause an animal unnecessary pain or suffering.
Nope. (And see (p1) re: “suffering”.)
(p12) It is morally wrong and despicable to treat animals inhumanely for no good reason.
Nope.
(p13) We ought to euthanize untreatably injured, suffering animals to put them out of their misery whenever feasible.
Whether we “ought to” do this depends on circumstances, but this is certainly not inherently true in a moral sense.
(p14) Other things being equal, it is worse to kill a conscious sentient animal than it is to kill a plant.
Nope.
(p15) We have a duty to help preserve the environment for future generations (at least for future human generations).
I’ll agree with this to a reasonable extent.
(p16) One ought to minimize one’s contribution toward environmental degradation, especially in those ways requiring minimal effort on one’s part.
Sure.
So, tallying up my responses, and ignoring all waffling and qualifications in favor of treating each response as purely binary for the sake of convenience… it seems I agree with 7 of the 17 propositions listed. Engel then says:
“While you do not have to believe all of (p1) – (p16) for my argument to succeed, the more of these propositions you believe, the greater your commitment to the immorality of eating meat”
So according to this, it seems that I should have a… moderate commitment to the immorality of eating meat? But here’s the problem:
How does the proposition “eating meat is immoral” actually follow from the propositions I assented to? Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it. Where is the demonstration? Where is the application of deductive rules that takes us from those premises to the conclusion? There’s nothing, just a bare set of premises and then a claimed conclusion, with nothing in between, no means of getting from one to the other.
Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it.
My usual reply to a claim that a philosophical statement is “proven formally” is to ask for a computer program calculating the conclusion from the premises, in the claimant’s language of choice, be it C or Coq.
Yes I was. My point was that if one writes a program that purports to prove that
“eating meat is immoral” actually follow from the propositions...
then the code can be examined and the hidden assumptions and inferences explicated. In the trivial example you wrote the conclusion is assumed, so the argument that it is proven from the propositions (by this program) is falsified.
Ah. Yeah, agreed. Of course, enough philosophers disdain computer science entirely that the “arguments” most in need of such treatment would be highly unlikely to receive it. “Argument by handwaving” or “argument by intimidation” is all too common among professional philosophers.
The worst part is how awkward it feels to challenge such faux-arguments. “Uh… this… what does this… say? This… doesn’t say anything. This… this is actually just a bunch of nonsense. And the parts that aren’t nonsense are just… just false. Is this… is this really supposed to be the argument?”
That doesn’t even pass a quick inspection test for “can do something different when handed different parameters”.
The original post looks at least as good as:
/* Trivially counts the premises the reader accepts; no actual inference happens. */
int calculate_the_conclusion(const char *premises_accepted_by_reader[], int n)
{
    int result = 0;
    for (int i = 0; i < n; i++)
        result++;   /* one unit of "conclusion" per accepted premise, regardless of content */
    return result;
}
Uh, with all respect, claiming that I am the one rejecting formal logic here is outlandishly absurd.
I have to ask: did you, in fact, read the entirety of my post? Honest question; I’m not being snarky here.
If you did (or do) read it, and still come to the conclusion that what’s going on here is that I am rejecting formal logic, then I guess we have exhausted the fruitfulness of the discussion.
I don’t think there’s a subthread about posthumans here yet, which surprises me. Most of the other points I’d think to make have been made by others.
Several times you specify that you care about humanity, because you are able to have relationships with humans. A few questions:
1) SaidAchmiz, whose views seem similar to yours, specified they hadn’t owned pets. Have you owned pets?
While this may vary from person to person, it seems clear to me that people are able to form relationships with dogs, cats, rats, and several other types of mammals (this is consistent with the notion that more-similar animals are able to form relationships with each other, on a sliding scale).
I’ve also recently made a friend with two pet turtles. One of the turtles seems pretty bland and unresponsive, but the other seems incredibly interested in interaction. I expect that some amount of the perceived relationship between my friend and their turtle is human projection, but I’ve still updated quite a bit on the relative potential-sentience of turtles. (Though my friend’s veterinarian did say the turtle is an outlier in terms of how much personality a turtle expresses.)
2) You’ve noted that you don’t care about babyeaters. Do you care about potential posthumans who share all the values you currently have, but also have new values you don’t care about one way or another, and are vastly more intelligent/empathetic/able to form complex relationships in ways that you can’t understand? Do you expect those posthumans to care about you?
I’m not sure how good an argument it is that “we should care about things dumber than us because we’d want smarter things to care about us”, in the context of aliens who might not share our values at all. But it seems at least a little relevant, when specifically concerning the possibility of trans-or-posthumans.
3) To the extent that you are not able to form relationships with other humans (because they are stupider than you, because they are less empathetic, or just because they’re jerks, or don’t share enough interests with you), do you consider them to have less moral worth? If not, why not?
Intellectually, I’m interested in the question: what moral framework would Extrapolated-Qiaochu-Yuan endorse (since, again, I’m an anti-realist)?
it seems clear to me that people are able to form relationships with dogs, cats, rats, and several other types of mammals (this is consistent with the notion that more-similar animals are able to form relationships with each other, on a sliding scale).
People are also able to form relationships of this kind with, say, ELIZA or virtual pets in video games or waifus. This is an argument in favor of morally valuing animals, but I think it’s a weak one without more detail about the nature of these relationships and how closely they approximate full human relationships.
Do you care about potential posthumans who share all the values you currently have, but also have new values you don’t care about one way or another, and are vastly more intelligent/empathetic/able to form complex relationships in ways that you can’t understand? Do you expect those posthumans to care about you?
Depends. If they can understand me well enough to have a relationship with me analogous to the relationship an adult human might have with a small child, then sure.
To the extent that you are not able to form relationships with other humans (because they are stupider than you, because they are less empathetic, or just because they’re jerks, or don’t share enough interests with you), do you consider them to have less moral worth? If not, why not?
I hid a lot of complexity in “in principle.” This objection also applies to humans who are in comas, for example, but a person being in a coma or not sharing my interests is a contingent fact, and I don’t think contingent facts should affect what beings have moral worth. I can imagine possible worlds reasonably close to the actual one in which a person isn’t in a coma or does share my interests, but I can’t imagine possible worlds reasonably close to the actual one in which a fish is complicated enough for me to have a meaningful relationship with.
Yes! We know stuff that our ancestors didn’t know; we have capabilities that they didn’t have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I’m going to protect me and my friends and other humans before worrying about other creatures, but that’s not because nonhumans don’t matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn’t make it okay.
I’m more than willing to agree that our ancestors were factually confused, but I think it’s important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:
I think our ancestors were primarily factually, rather than morally, confused. I don’t see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
Cool. Then we’re in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.
(I have no idea how consciousness works, so in general, I can’t answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can’t affect what the program is actually doing.
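A minimal sketch of the thought experiment, purely for illustration (this isn’t anyone’s actual program; the variable name is just taken from the question above): the label asserts suffering, but it plays no causal role in what the program computes.

```python
# Purely illustrative: a program whose only "implementation" of suffering
# is the name of a boolean variable.

isSuffering = True  # the label claims suffering...


def run() -> bool:
    # ...but the label is causally inert: the program's observable behavior
    # would be identical if the variable were renamed to, say, `flag`.
    return isSuffering


if __name__ == "__main__":
    run()
```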
That doesn’t follow if it turns out that preventing animal suffering is sufficiently cheap.
I’m not sure moral intuitions divide as cleanly into factual and nonfactual components as this suggests. Learning new facts can change our motivations in ways that are in no way logically or empirically required of us, because our motivational and doxastic mechanisms aren’t wholly independent. (For instance, knowing a certain fact may involve visualizing certain circumstances more concretely, and vivid visualizations can certainly change one’s affective state.) If this motivational component isn’t what you had in mind as the ‘moral’, nonfactual component of our judgments, then I don’t know what you do have in mind.
I don’t think this is specifically relevant. I upvoted your ‘blue robot’ comment because this is an important issue to worry about, but ‘that’s a black box’ can’t be used as a universal bludgeon. (Particularly given that it defeats appeals to ‘isHuman’ even more thoroughly than it defeats appeals to ‘isSuffering’.)
I assume you’re being tongue-in-cheek here, but be careful not to mislead spectators. ‘Human life isn’t perfect, ergo we are under no moral obligation to eschew torturing non-humans’ obviously isn’t sufficient here, so you need to provide more details showing that the threats to humanity warrant (provisionally?) ignoring non-humans’ welfare. White slave-owners had plenty of white-person-specific problems to deal with, but that didn’t exonerate them for worrying about their (white) friends and family to the extreme exclusion of black people.
I think of moral confusion as a failure to understand your actual current or extrapolated moral preferences (introspection being unreliable and so forth).
Nope.
I don’t think this analogy holds water. White slave-owners were aware that their slaves were capable of learning their language and bearing their children and all sorts of things that fish can’t do.
Sure. And humans are aware that fish are capable of all sorts of things that rocks and sea hydras can’t do. I don’t see a relevant disanalogy. (Other than the question-begging one ‘fish aren’t human’.)
I guess that should’ve ended ”...that fish can’t do and that are important parts of how they interact with other white people.” Black people are capable of participating in human society in a way that fish aren’t.
A “reversed stupidity is not intelligence” warning also seems appropriate here: I don’t think the correct response to disagreeing with racism and sexism is to stop discriminating altogether in the sense of not trying to make distinctions between things.
I don’t think we should stop making distinctions altogether either; I’m just trying not to repeat the mistakes of the past, or analogous mistakes. The straw-man version of this historical focus is to take ‘the expanding circle’ as a universal or inevitable historical progression; the more interesting version is to try to spot a pattern in our past intellectual and moral advances and use it to hack the system, taking a shortcut to a moral code that’s improved far beyond contemporary society’s hodgepodge of standards.
I think the main lesson from ‘expanding circle’ events is that we should be relatively cautious about assuming that something isn’t a moral patient, unless we can come up with an extremely principled and clear example of a necessary condition for moral consideration that it lacks. ‘Black people don’t have moral standing because they’re less intelligent than us’ fails that criterion, because white children can be unintelligent and yet deserve to be treated well. Likewise, ‘fish can’t participate in human society’ fails, because extremely pathologically antisocial or socially inept people (of the sort that can’t function in society at all) still shouldn’t be tortured.
(Plus many fish can participate in their own societies. If we encountered an extremely alien sentient species that was highly prosocial but just found it too grating to be around us for our societies to mesh, would we be justified in torturing them? Likewise, if two human civilizations get along fine internally but have social conventions that make fruitful interaction impossible, that doesn’t give either civilization the right to oppress the other.)
On the other hand, ‘rocks aren’t conscious’ does seem to draw on a good and principled necessary condition—anything unconscious (hence incapable of suffering or desiring or preferring) does seem categorically morally irrelevant, in a vacuum. So excluding completely unconscious things has the shape of a good policy. (Sure, it’s a bit of an explanatory IOU until we know exactly what the neural basis of ‘consciousness’ is, but ‘intelligent’ and ‘able to participate in human society’ are IOUs in the same sense.) Likewise for gods and dead bodies—the former don’t exist, and the latter again fail very general criteria like ‘is it conscious?’ and ‘can it suffer?’ and ‘can it desire?’. These are fully general criteria, not ad-hoc or parochial ones, so they’re a lot less likely to fall into the racism trap.
Possibly they fall into a new and different trap, though? Even so, I feel more comfortable placing most of the burden of proof on those who want to narrow our circle, rather than those who want to broaden it. The chances of our engineering (or encountering in the stars) new species that blur the lines between our concepts of psychological ‘humanity’ and ‘inhumanity’ are significant, and that makes it dangerous to adopt a policy of ‘assume everything with a weird appearance or behavior has no moral rights until we’ve conclusively proved that its difference from us is only skin-deep’.
What about unconscious people?
So what’s your position on abortion?
I don’t know why you got a down-vote; these are good questions.
I’m not sure there are unconscious people. By ‘unconscious’ I meant ‘not having any experiences’. There’s also another sense of ‘unconscious’ in which people are obviously sometimes unconscious — whether they’re awake, aware of their surroundings, etc. Being conscious in that sense may be sufficient for ‘bare consciousness’, but it’s not necessary, since people can experience dreams while ‘unconscious’.
Supposing people do sometimes become truly and fully unconscious, I think this is morally equivalent to dying. So it might be that in a loose sense you die every night, as your consciousness truly ‘switches off’ — or, equivalently, we could say that certain forms of death (like death accompanying high-fidelity cryonic preservation) are in a loose sense a kind of sleep. You say /pəˈteɪtəʊ/, I say /pəˈteɪtoʊ/. The moral rights of dead or otherwise unconscious people would then depend on questions like ‘Do we have a responsibility to make conscious beings come into existence?’ and ‘Do we have a responsibility to fulfill people’s wishes after they die?‘. I’d lean toward ‘yes’ on the former, ‘no but it’s generally useful to act as though we do’ on the latter.
Complicated. At some stages the embryo is obviously unconscious, for the same reason some species are obviously unconscious. It’s conceivable that there’s no true consciousness at all until after birth — analogously, it’s possible all non-humans are zombies — but at this point I find it unlikely. So I think mid-to-late-stage fetuses do have some moral standing — perhaps not enough for painlessly killing them to be bad, but at least enough for causing them intense pain to be bad. (My view of chickens is similar; suffering is the main worry rather than death.) The two cases are also analogous in that some people have important health reasons for aborting or for eating meat.
The original statement of my heuristic for deciding moral worth contained the phrase “in principle” which was meant to cover cases like this. A human in a contingent circumstance (e.g. extremely socially inept, in a coma) that prevents them from participating in human society is unfortunate, but in possible worlds very similar to this one they’d still be capable of participating in human society. But even in possible worlds fairly different from this one, fish still aren’t so capable.
I also think the reasoning in this example is bad for general reasons, namely that moral heuristics don’t behave like scientific theories: falsifying a moral hypothesis doesn’t mean it’s not worth considering. Heuristics that sometimes fail can still be useful, and in general I am skeptical of people who claim to have useful moral heuristics that don’t fail on weird edge cases (sufficiently powerful such heuristics should constitute a solution to friendly AI).
I’m skeptical of the claim that any fish have societies in a meaningful sense. Citation?
If they’re intelligent enough we can still trade with them, and that’s fine.
I don’t think this is analogous to the above case. The psychological unity of mankind still applies here: any human from one civilization could have been raised in the other.
Yes: not capturing complexity of value. Again, morality doesn’t behave like science. Looking for general laws is not obviously a good methodology, and in fact I’m pretty sure it’s a bad methodology.
‘Your theory isn’t complex enough’ isn’t a reasonable objection, in itself, to a moral theory. Rather, ‘value is complex’ is a universal reason to be less confident about all theories. (No theory, no matter how complex, is immune to this problem, because value might always turn out to be even more complex than the theory suggests.) To suggest that your moral theory is more likely to be correct than a simpler alternative merely because it’s more complicated is obviously wrong, because knowing that value is complex tells us nothing about how it is complex.
In fact, even though we know that value is complex, a complicated theory that accounts for the evidence will almost always get more wrong than a simple theory that accounts for the same evidence—a more detailed map can be wrong about the territory in more ways.
Interestingly, in all the above respects human morality does behave like any other empirical phenomenon. The reasons to think morality is complex, and the best methods for figuring out exactly how it is complex, are the same as for any complex natural entity. “Looking for general laws” is a good idea here for the same reason it’s a good idea in any scientific endeavor; we start by ruling out the simplest explanations, then move toward increasing complexity as the data demands. That way we know we’re not complicating our theory in arbitrary or unnecessary ways.
Knowing at the outset that storms are complex doesn’t mean that we shouldn’t try to construct very simple predictive and descriptive models of weather systems, and see how close our simulation comes to getting it right. Once we have a basically right model, we can then work on incrementally increasing its precision. As for storms, so for norms. The analogy is particularly appropriate because in both cases we seek an approximation not only as a first step in a truth-seeking research program, but also as a behavior-guiding heuristic for making real-life decisions under uncertainty.
If I am sure that value is complex and I am given two theories, one of which is complex and the other simple, then I can be sure that the simple one is wrong. The other one is merely probably wrong (as most such theories are). “Too simple” is a valid objection if the premise “Not simple” is implied.
That’s assuming the two theories are being treated as perfected Grand Unified Theories Of The Phenomenon. If that’s the case, then yes, you can simply dismiss a purported Finished Product that is too simple, without even bothering to check on how accurate it is first. But we’re talking about preliminary hypotheses and approximate models here. If your first guess adds arbitrary complications just to try to look more like you think the Final Theory will someday appear, you won’t learn as much from the areas where your map fails. ‘Value is complex’ is compatible with the utility of starting with simple models, particularly since we don’t yet know in what respects it is complex.
Obviously that’s not what I’m suggesting. What I’m suggesting is that it’s more complicated, and that this complication is justified from my perspective because it captures my moral intuitions better.
What data?
Then again, the same applies to scientific theories, so long as the old now-falsified theory is a good approximation to the new currently accepted theory within certain ranges of conditions (e.g. classical Newtonian physics if you’re much bigger than an atom and much slower than light).
Isn’t a quasi-Aristotelian notion of the accidental/essential or contingent/necessary properties of different species a rather metaphysically fragile foundation for you to base your entire ethical system on? We don’t know whether the unconscious / conscious distinction will end up being problematized by future research, but we do already know that the distinctions between taxonomical groupings can be very fuzzy—and are likely to become far fuzzier as we take more control of our genetic future. We also know that what’s normal for a certain species can vary wildly over historical time. ‘In principle’ we could provide fish with a neural prosthesis that makes them capable of socializing productively with humans, but because our prototype of a fish is dumb, while our prototype of a human is smart, we think of smart fish and dumb humans as aberrant deviations from the telos (proper function) of the species.
It seems damningly arbitrary to me. Why should torturing sentient beings be OK in contexts where the technology for improvement is (or ‘feels’?) distant, yet completely intolerable in contexts where this external technology is more ‘near’ on some metric, even if in both cases there is never any realistic prospect of the technology being deployed here?
I don’t find it implausible that we currently use prototypes as a quick-and-dirty approximation, but I do find it implausible that on reflection, our more educated and careful selves would continue to found the human enterprise on essentialism of this particular sort.
Actually, now that you bring it up, I’m surprised by how similar the two are. ‘Heuristics’ by their very nature are approximations; if we compare them to scientific models that likewise approximate a phenomenon, we see in both cases that an occasional error is permissible. My objection to the ‘only things that can intelligently socialize with humans matter’ heuristic isn’t that it gets things wrong occasionally; it’s that it almost always yields the intuitively wrong answer, and when it gets the right answer it seems to do so for overdetermined reasons. E.g., it gets the right answer in cases of ordinary human suffering and preference satisfaction.
I agree that someone who claims an unrealistic level of confidence in a moral claim as an individual deserves less trust. But that’s different from claiming that it’s an advantage of a moral claim that it gets the right answer less often.
I just meant a stable, cooperative social group. Is there something specific about human societies that you think is the source of their unique moral status?
If we can’t trade with them for some reason, it’s still not OK to torture them.
‘The psychological unity of mankind’ is question-begging here. It’s just a catchphrase; it’s not as though there’s some scientific law that all and only biologically human minds form a natural kind. If we’re having a battle of catchphrases, vegetarians can simply appeal to the ‘psychological unity of sentient beings’.
Sure, they’re less unified, but how do we decide how unified a unity has to be? While you dismiss the psychological unity of sentient beings as too generic to be morally relevant, the parochialist can step up to dismiss the psychological unity of mankind as too generic to be morally relevant, preferring instead to favor only the humans with a certain personality type, or a certain cultural background, or a certain ideology. What I’m looking for is a reason to favor the one unity over an infinite number of rival unities.
I should also reiterate that it’s not an advantage of your theory that it requires two independent principles (‘being biologically human’, ‘being able to (be modified without too much difficulty into something that can) socialize with biological humans’) to explain phenomena that other models can handle with only a single generalization. Noting that value is complex is enough to show that your model is possible, but it’s not enough to elevate it to a large probability.
I don’t think most fish have complicated enough minds for this to be true. (By contrast, I think dolphins might, and this might be a reason to care about dolphins.)
You’re still using a methodology that I think is suspect here. I don’t think there are good reasons to expect “everything that feels pain has moral value, period” to be a better moral heuristic than “some complicated set of conditions singles out the things that have moral value” if, upon reflection, those conditions seem to be in agreement with what my System 1 is telling me I actually care about (namely, as far as I can tell, my System 1 cares about humans in comas but not fish). My System 2 can try to explain what my System 1 cares about, but if those explanations are bad because your System 2 can find implications they have which are bad, then oh well: at the end of the day, as far as I can tell, System 1 is where my moral intuitions come from, not System 2.
Your intuition, not mine.
System 1 doesn’t know what a biological human is. I’m not using “human” to mean “biological human.” I’m using “human” to mean “potential friend.” Posthumans and sufficiently intelligent AI could also fall in this category, but I’m still pretty sure that fish don’t. I actually only care about the second principle.
While getting what I regard to be the wrong answers with respect to most animals. A huge difference between morality and science is that the results of properly done scientific experiments can be relatively clear: it can be clear to all observers that the experiment provides evidence for or against some theory. Morality lacks an analogous notion of moral experiment. (We wouldn’t be having this conversation if there were such a thing as a moral experiment; I’d be happy to defer to the evidence in that case, the same as I would in any scientific field where I’m not a domain expert.)
Thanks for fleshing out your view more! It’s likely that previously I was being a bit too finicky with how you were formulating your view; I wanted to hear you come out and express the intuition more generally so I could see exactly where you thought the discontinuity lay, and I think you’ve done a good job of that now. Any more precision would probably be misleading, since the intuition itself is a bit amorphous: A lot of people think of their pets as friends and companions in various ways, and it’s likely that no simple well-defined list of traits would provide a crisp criterion for what ‘friendship’ or ‘potential friendship’ means to you. It’s just a vague sense that morality is contingent on membership in a class of (rough) social equals, partners, etc. There is no room in morality for a hierarchy of interests — everything either deserves (roughly) all the rights, or none of them at all.
The reliance on especially poorly-defined and essentializing categories bothers me, but I’ll mostly set that aside. I think the deeper issue here is that our intuitions do allow for hierarchies, and for a more fine-grained distribution of rights based on the different faculties of organisms. It’s not all-or-nothing.
Allowing that it’s not all-or-nothing lets us escape most of your view’s problems with essentialism and ad-hoc groupings — we can allow that there is a continuum of different moral statuses across individual humans for the same reasons, and in the same ways, that there is a continuum across species. For instance, if it were an essential fact that our species divided into castes, one of which just couldn’t be a ‘friend’ or socialize with the other — a caste with permanent infant-like minds, for instance — we wouldn’t be forced into saying that this caste either has 100% of our moral standing, or 0%. Thinking in terms of a graded scale of moral responsibility gives us the flexibility needed to adapt to an unpredictable environment that frequently lacks sharp lines between biological kinds. And once we admit such a scale, it becomes much harder to believe that the scale ends completely at a level above the most intelligent and suffering-prone non-human, as opposed to slowly trailing off into the phylogenetic distance.
On your view, the reason — at a deep level, the only reason — that it’s the least bit wrong to torture infants or invalids arbitrarily intensely for arbitrarily long periods of time, is that when we think of infants and invalids, we imagine them as (near) potential or (near) would-be ‘friends’ of ours. That is, I see a baby injured and in pain and I feel morally relevant sympathy (as opposed to, say, the confused, morally delusive sympathy some people feel toward dogs or dolphins) only because I imagine that the baby might someday grow up and, say, sell staplers to me, share secrets with me, or get me a job as a server at Olive Garden. Even if the child has some congenital disease that makes its premature death a physical certainty, still the shadow of its capacities to socialize with me, cast from nearby possible worlds (or just from the ease with which we could imagine the baby changing to carry on business transactions with me), confers moral significance upon its plight.
I just don’t think that’s so. Our sympathy toward infants doesn’t depend on a folk theory of human development. We’d feel the same sympathy, or at least something relevantly similar, even if we’d been raised in a bubble and never been told that infants are members of the human species at all prior to encountering one. If you knew all the physiological and psychological facts about infants except ones that showed they tend to develop into intelligent socializers, you’d already have plenty of good reason not to torture infants. Learning that infants develop into adult humans might add new (perhaps even better) reasons to not torture them, but it wouldn’t constitute the only such reasons. But in that case the sympathy we feel for animals with infant-like neural and psychological traits, but that never develop into active participants in complex language-using societies, would be morally relevant for the same reasons.
We’ve been speaking as though my view of morality were the simple one, yours relatively complex and rich. But in fact I think your reluctance to say ‘it’s bad when chickens suffer’ results from an attempt to oversimplify and streamline your own ethical reasoning, privileging one set of moral intuitions without providing a positive argument for the incoherence or illegitimacy of the others. I think it’s entirely possible that some of our moral intuitions are specific to beings like adult humans (e.g., intuitions of reciprocity, fairness, and authority), while others generalize to varying degrees to non-humans (e.g., intuitions of suffering and gain). Because different moral thoughts and emotions can be triggered by radically different stimuli, we shouldn’t be surprised that some of those stimuli require an assumption of intelligence (hence, if applied to chickens, plausibly depend on excessive anthropomorphization), while others require very different assumptions. In a slogan: I think that in addition to moral relationships of friendship, kinship, alliance and collaboration, there are partly autonomous relationships of caregiving that presuppose quite different faculties.
Maybe, evolutionarily, we feel sympathy for non-humans only as a side-effect of traits selected because they help us cooperate with other humans. But that then is just a historical fact; it has no bearing on the legitimacy, or the reality, of our sympathy for suffering non-humans. What’s really needed to justify carnivory is a proof that this sympathy presupposes an ascription of properties to non-humans that they in fact lack, e.g., sentience. Our intuitions about the moral relevance of chickens would then go the way of moral intuitions about ghosts and gods. But I haven’t seen a case like this made yet; the claim has not been that animals don’t suffer (or that their ‘suffering’ is so radically unlike human suffering that it loses its moral valence). Rather, the claim has been that if brutes do suffer, that doesn’t matter (except insofar as it also causes human suffering).
We both think moral value is extremely complicated; but I think it’s relatively disjunctive, whereas you think it’s relatively conjunctive. I’d be interested to hear arguments for why we should consider it conjunctive, but I still think it’s important that to the extent we’re likely to be in error at all, we err on the side of privileging disjunctivity.
This is a good point. I’ll have to think about this.
This is quite a good post, thanks for taking the time to write it. You’ve said before that you think vegetarianism is the morally superior option. While you’ve done a good job here of defending the coherence or possibility of the moral significance of animal suffering, would you be willing to go so far as to defend such moral significance simpliciter?
I ask in part because I don’t think the claim that we ought to err on the side of disjunctivity as I think you construe it (where this involves something like a proportional distribution of moral worth on the basis of a variety of different merits and relationships) is morally safer than operating as if there were a hard and flat moral floor. Operating on your basis we might be less likely to exclude from moral consideration those that ought to be included, but we will be more likely to distribute moral value unevenly where it should be evenly distributed. We’ve historically had both problems, and I don’t know that one or the other is necessarily the more disastrous. Exclusion has led to some real moral abominations (the Holocaust, I guess), but uneven distribution where even distribution is called for has led to some long-standing and terribly unjust political traditions (feudalism, say).
EDIT: I should add, and not at all by way of criticism, that for all the pejorative aimed at Aristotelian thinking in this last exchange, your conclusion (excluding the safety bit) is strikingly Aristotelian.
Thanks, hen! My primary argument is indeed that if animals suffer, that is morally significant — not that this thesis is coherent or possible, but that it’s true. My claim is that although humans are capable both of suffering and of socializing, and both of these have ethical import, the import of suffering is not completely dependent on the import of socializing, but has some valence in its own right. This allows us to generalize the undesirability of suffering both to sapient nonsocial sentient beings and to nonsapient nonsocial sentient beings, independent of whether they would be easy, hard, or impossible to modify into a social being.
It’s hard to talk about this in the abstract, so maybe you should say more about what you’re worried about, and (ideally) about some alternative that avoids the problem. It sounds like you’re suggesting that if we assert that humans have a richer set of rights than non-humans — if we allow value to admit of many degrees and multiple kinds — then we may end up saying that some groups of humans intrinsically deserve more rights than others, in a non-meritocratic way. Is that your worry?
Thanks for filling that out. Could I ask you to continue with a defense of this premise in particular? (You may have done this already, and I may have missed it. If so, I’d be happy to be pointed in the right direction).
My worry is with both meritocratic and non-meritocratic unevenness. You said earlier that Qiaochu’s motivation for excluding animals from moral consideration was based on a desire for simplicity. I think this is right, but could use a more positive formulation: I think on the whole people want this simplicity because they want to defend the extremely potent modern intuition that moral hierarchy is unqualifiedly wrong. At least part of this idea is to leave our moral view fully determined by our understanding of humanity: we owe to every human (or relevantly human-like thing) the moral consideration we take ourselves to be owed. Most vegetarians, I would think, deploy such a flat moral floor (at sentience) for defending the rights of animals.
So one view Qiaochu was attacking (I think) by talking about the complexity of value is the view that something so basic as sentience could be the foundation for our moral floor. Your response was not to argue for sentience as such a basis, but to deny the moral floor in favor of a moral stairway, thereby eliminating the absurdity of regarding chickens as full fledged people.
The reason this might be worrying is that our understanding of what it is to be human, or of what kinds of things are morally valuable, now fails to determine our ascription of moral worth. So we admit the possibility of distributing moral worth according to intelligence, strength, military power, wealth, health, beauty, etc., and thereby denying to many people who fall short in these ways the moral significance we generally think they’re owed. It was a view very much along these lines that led Aristotle to posit that some human beings, incapable of serious moral achievement for social or biological reasons, were natural slaves. He did not say they were morally insignificant, mind, just that given their capacities slavery was the best they could do.
I’m not saying you’re committed to any kind of moral oligarchy, only that because this kind of disjunctive strategy eschews the direct and solitary link between humanity and moral value, it cannot be called the safer option without further ado. A society in error could do as much damage proceeding by your disjunctive rule (by messing up the distributions) as they could proceeding with a conjunctive rule (by messing up who counts as ‘human’).
An alternative might be to say that there is moral value proper, which every human being (or relevantly human-like thing) has, and then there are a variety of defective or subordinate forms of moral significance depending on how something is related to that moral core. This way, you’d keep the hard moral floor, but you’d be able to argue for the (non-intrinsic) moral value of non-humans. (Unfortunately, this alternative also deploys an Aristotelian idea: core-dependent homonymy.)
The term you are looking for here is ‘person’. The debate you are currently having is about what creatures are persons.
The following definitions aid clarity in this discussion:
Animal—a particular form of life that has evolved on earth; most animals are mobile, multicellular, and respond to their environment (but this is not universally necessary or sufficient).
Human—a member of the species Homo sapiens, a particular type of hairless ape.
Person—A being which has recognized agency, and (in many moral systems) specific rights.
Note that separating ‘person’ from ‘human’ allows you to recognize the possibility that all humans are not necessarily persons in all moral systems (i.e.: apartheid regimes and ethnic cleansing schemas certainly treat many humans as non-persons; certain cultures treat certain genders as effectively non-persons, etc.). If this is uncomfortable for you, explore the edges of it until your morality restabilizes (example: brain-dead humans are still human, but are they persons?).
Just keep adding complexity until you get an intelligent socializer. If an AI can be built, and prosthetics can be built, then a prosthetic that confers intelligence upon another system can be built. At worst, the fish brain would just play an especially small or especially indirect causal role in the rest of the brain’s functioning.
You are deferring to evidence; I just haven’t given you good evidence yet that you do indeed feel sympathy for non-human animals (e.g., I haven’t bombarded you with videos of tormented non-humans; I can do so if you wish), nor that you’re some sort of exotic fish-sociopath in this regard. If you thought evidence had no bearing on your current moral sentiments, then you wouldn’t be asking me for arguments at all. However, because we’re primarily trying to figure out our own psychological states, a lot of the initial evidence is introspective—we’re experimenting on our own judgments, testing out different frameworks and seeing how close they come to our actual values. (Cf. A Priori.)
But in that case I would be tempted to ascribe moral value to the prosthetic, not the fish.
Agreed, but this is why I think the analogy to science is inappropriate.
I doubt there will always be a fact of the matter about where an organism ends and its prosthesis begins. My original point here was that we can imagine a graded scale of increasingly human-socialization-capable organisms, and it seems unlikely that Nature will be so kind as to provide us with a sharp line between the Easy-To-Make-Social and the Hard-To-Make-Social. We can make that point by positing prosthetic enhancements of increasing complexity, or genetic modifications to fish brain development, or whatever you please.
Fair enough! I don’t have a settled view on how much moral evidence should be introspective v. intersubjective, as long as we agree that it’s broadly empirical.
With respect to this human-socialization-as-arbiter-of-moral-weight idea, are you endorsing the threshold which human socialization currently demonstrates as the important threshold, or the threshold which human socialization demonstrates at any given moment?
For example, suppose species X is on the wrong side of that line (however fuzzy the line might be). If instead of altering Xes so they were better able to socialize with unaltered humans and thereby had, on this view, increased moral weight, I had the ability to increase my own ability to socialize with X, would that amount to the same thing?
Thinking about this… while I sympathize with the temptation, it does seem to me that the same mindset that leads me in this direction also leads me to ascribe moral values to human societies, rather than to individual humans.
I’m not yet sure what I want to do with that.
It might be worth distinguishing a genetic condition on X from a constituting condition on X. So human society is certainly necessary to bring about the sapience and social capacities of human beings, but if you remove the human from the society once they’ve been brought up in the relevant way, they’re no less capable of social and sapient behavior.
On the other hand, the fish-prosthetic is part of what constitutes the fish’s capacity for social and sapient behavior. If the fish were removed from it, it would lose those capacities.
I think you could plausibly say that the prosthetic should be considered part of the basis for the moral worth of the fish (at the expense of the fish on its own), but refuse to say this about human societies (at the expense of individual human) in light of this distinction.
Hm.
Well, I agree with considering the prosthetic part of the basis of the worth of the prosthetically augmented fish, as you suggest.
And while I think we underestimate the importance of a continuing social framework for humans to be what we are, even as adults, I will agree that there’s some kind of meaningful threshold to be identified such that I can be removed from human society without immediately dropping below that threshold, and there’s an important difference (if perhaps not strictly a qualitative one) between me and the fish in this respect.
So, yeah, drawing this distinction allows me to ascribe moral value to individual adult humans (though not to very young children, I suppose), rather than entirely to their societies, even while embracing the general principle here.
Fair enough.
I’ve seen that C.S. Lewis quote before, and it seems to me quite mistaken. In this part:
Lewis seems to suggest that executing a witch, per se, is what we consider bad. But that’s wrong. What was bad about witch hunts was:
1) People were executed without anything resembling solid evidence of their guilt — which of course could not possibly have been obtained, seeing as how they were not guilty and the crimes they were accused of were imaginary; but my point is that the “trial” process was horrifically unjust and monstrously inhumane (torture to extract confessions, etc.). If witches existed today, and if we believed witches existed today, we would still (one should hope!) give them fair trials, convict only on the strength of proof beyond a reasonable doubt, accord the accused all the requisite rights, etc.
2) Punishments were terribly inhumane — burning alive? Come now. Even if we thought witches existed today, and even if we thought the death penalty was an appropriate punishment, we’d carry it out in a more humane manner, and certainly not as a form of public entertainment (again, one would hope; at least, our moral standards today dictate thus).
So differences of factual belief are not the main issue here. The fact that, when you apply rigorous standards of evidence and fair prosecution practices to the witch issue, witchcraft disappears as a crime, is instructive (i.e. it indicates that there’s no such crime in the first place), but we shouldn’t therefore conclude that not believing in witches is the relevant difference between us and the Inquisition.
Considering that people seemed to think this was the best way to find witches, point 1 still seems like a factual confusion.
Point 2 was based on a Bible quote, I think. The state hanged witches.
We would? That seems incredibly dangerous. Who knows what kind of things a real witch could do to a jury?
If you think humanity as a whole has made substantial moral progress throughout history, what’s driven this moral progress? I can tell a story about what drives factual progress (the scientific method, improved technology) but I don’t have an analogous story about moral progress. How do you distinguish the current state of affairs from “moral fashion is a random walk, so of course any given era thinks that past eras were terribly immoral”?
Who knows what kind of things a real witch could do to an executioner, for that matter?
There is a difference between “we should take precautions to make sure the witch doesn’t blanket the courtroom with fireballs or charm the jury and all officers of the court; but otherwise human rights apply as usual” and “let’s just burn anyone that anyone has claimed to be a witch, without making any attempt to verify those claims, confirm guilt, etc.” Regardless of what you think would happen in practice (fear makes people do all sorts of things), it’s clear that our current moral standards dictate behavior much closer to the former end of that spectrum. At the absolute least, we would want to be sure that we are executing the actual witches (because every accused person could be innocent and the real witches could be escaping justice), and, for that matter, that we’re not imagining the whole witchcraft thing to begin with! That sort of certainty requires proper investigative and trial procedures.
That’s two questions (“what drives moral progress” and “how can you distinguish moral progress from a random walk”). They’re both interesting, but the former is not particularly relevant to the current discussion. (Yvain does, however, make some convincing arguments at his blog [sorry, don’t have a link to specific posts atm] that it’s technological advancement that drives what we think of as “moral progress”.)
As for how I can distinguish it from a random walk — that’s harder. However, my objection was to Lewis’s assessment of what constitutes the substantive difference between our moral standards and those of medieval witch hunters, which I think is totally mistaken. I do not need even to claim that we’ve made moral progress per se to make my objection.
No, they don’t. Are you saying it’s not possible to construct a mind for which pain and suffering are not bad? Or are you defining pain and suffering as bad things? In that case, I can respond that the neural correlates of human pain and human suffering might not be bad when implemented in brains that differ from human brains in certain relevant ways (Edit: and would therefore not actually qualify as pain and suffering under your new definition).
There’s a difference between “it’s possible to construct a mind” and “other particular minds are likely to be constructed a certain way.” Our minds were built by the same forces that built other minds we know of. We should expect there to be similarities.
(I also would define it, not in terms of “pain and suffering” but “preference satisfaction and dissatisfaction”. I think I might consider “suffering” as dissatisfaction, by definition, although “pain” is more specific and might be valuable for some minds.)
Such as human masochists.
I agree that expecting similarities is reasonable (although which similarities, and to what extent, is the key followup question). I was objecting to the assertion of (logical?) necessity, especially since we don’t even have so much as a strong certainty.
I don’t know that I’m comfortable with identifying “suffering” with “preference dissatisfaction” (btw, do you mean by this “failure to satisfy preferences” or “antisatisfaction of negative preferences”? i.e. if I like playing video games and I don’t get to play video games, am I suffering? Or am I only suffering if I am having experiences which I explicitly dislike, rather than simply an absence of experiences I like? Or do you claim those are the same thing?).
I can’t speak for Raemon, but I would certainly say that the condition described by “I like playing video games and am prohibited from playing video games” is a trivial but valid instance of the category /suffering/.
Is the difficulty that there’s a different word you’d prefer to use to refer to the category I’m nodding in the direction of, or that you think the category itself is meaningless, or that you don’t understand what the category is (reasonably enough; I haven’t provided nearly enough information to identify it if the word “suffering” doesn’t reliably do so), or something else?
I’m usually indifferent to semantics, so if you’d prefer a different word, I’m happy to use whatever word you like when discussing the category with you.
That one. Also, what terms we should use for which categories of things, and whether I know what you’re talking about, depend on what claims are being made… I was objecting to Zack_M_Davis’s claim, which I take to be either something like:
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. If we take that implementation and put it in another kind of brain (alternatively: if we find some other kind of brain where the same or similar implementation is present), then this brain is also necessarily having the same experiences, and we should consider them to be bad also.”
or...
“We humans have categories of experiences called ‘pain’ and ‘suffering’, which we consider to be bad. These things are implemented in our brains somehow. We can sensibly define these phenomena in an implementation-independent way, then if any other kind of brain implements these phenomena in some way that fits our defined category, we should consider them to be bad also.”
I don’t think either of those claims are justified. Do you think they are? If you do, I guess we’ll have to work out what you’re referring to when you say “suffering”, and whether that category is relevant to the above issue. (For the record, I, too, am less interested in semantics than in figuring out what we’re referring to.)
There are a lot of ill-defined terms in those claims, and depending on how I define them I either do or don’t. So let me back up a little.
Suppose I prefer that brain B1 not be in state S1.
Call C my confidence that state S2 of brain B2 is in important ways similar to B1 in S1.
The higher C is, the more confident I am that I prefer B2 not be in S2. The lower C is, the less confident I am.
So if you mean taking the implementation of pain and suffering (S1) from our brains (B1) and putting/finding them or similar (C is high) implementations (S2) in other brains (B2), then yes, I think that if (S1) pain and suffering are bad (I antiprefer them) for us (B1), that’s strong but not overwhelming evidence that (S2) pain and suffering are bad (I antiprefer them) for others (B2).
I don’t actually think understanding more clearly what we mean by pain and suffering (either S1 or S2) is particularly important here. I think the important term is C.
As long as C is high—that is, as long as we really are confident that the other brain has a “same or similar implementation”, as you say, along salient dimensions (such as manifesting similar subjective experience) -- then I’m pretty comfortable saying I prefer the other brain not experience pain and suffering. And if (S2,B2) is “completely identical” to (S1,B1), I’m “certain” I prefer B2 not be in S2.
But I’m not sure that’s actually what you mean when you say “same or similar implementation.” You might, for example, mean that they have anatomical points of correspondence, but you aren’t confident that they manifest similar experience, or something else along those lines. In which case C gets lower, and I become uncertain about my preferences with respect to (B2,S2).
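To make the shape of this explicit, here’s a toy sketch (hypothetical names, not anyone’s actual decision procedure): the only structural claim being modeled is that my confidence that I anti-prefer (B2, S2) increases monotonically with C, my confidence that (B2, S2) is relevantly similar to (B1, S1).

```python
def antipreference_confidence(c: float) -> float:
    """Toy model: map similarity confidence C (that brain B2 in state S2 is
    relevantly similar to B1 in S1) onto confidence that I anti-prefer B2
    being in S2. Any monotonically increasing map would fit the argument;
    the identity map is used here only for illustration."""
    if not 0.0 <= c <= 1.0:
        raise ValueError("C is a degree of confidence and must lie in [0, 1]")
    return c


# "Completely identical" case: certainty that I anti-prefer (B2, S2).
assert antipreference_confidence(1.0) == 1.0
# Monotonicity: higher similarity confidence, higher anti-preference confidence.
assert antipreference_confidence(0.3) < antipreference_confidence(0.9)
```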
Is brain B1 your brain in this scenario? Or just… some brain? I ask because I think the relevant question is whether the person whose brain it is prefers that brain Bx be or not be in state Sx, and we need to first answer that, and only then move on to what our preferences are w.r.t. other beings’ brain states.
Anyway, it seemed to me like the claim that Zack_M_Davis was making was about the case where certain neural correlates (or other sorts of implementation details) of what we experience as “pain” and “suffering” (which, for us, might usefully be operationalized as “brain states we prefer not to be in”) are found in other life-forms, and we thus conclude that a) these beings are therefore also experiencing “pain” and “suffering” (i.e. are having the same subjective experiences), and b) that these beings, also, have antipreferences about those brain states...
Those conclusions are not entailed by the premises. We might expect them to be true for evolutionarily related life-forms, but my objection was to the claim of necessity.
Or, he could have been making the claim that we can usefully describe the category of “pain” and/or “suffering” in ways that do not depend on neural correlates or other implementation details (perhaps this would be a functional description of some sort, or a phenomenological one; I don’t know), and that if we then discover phenomena matching that category in other life-forms, we should conclude that they are bad.
I don’t think that conclusion is justified either… or rather, I don’t think it’s instructive. For instance, Alien Species X might have brain states that they prefer not to be in, but their subjective experience associated with those brain states bears no resemblance in any way to anything that we humans experience as pain or suffering: not phenomenologically, not culturally, not neurally, etc. The only justification for referring to these brain states as “suffering” is by definition. And we all know that arguing “by definition” makes a def out of I and… wait… hm… well, it’s bad.
My brain is certainly an example of a brain that I prefer not be in pain, though not the only example.
My confidence that brain B manifests a mind that experiences pain and suffering given certain implementation (or functional, or phenomenological,or whatever) details depends a lot on those details. As does my confidence that B’s mind antiprefers the experiential correlates of those details. I agree that there’s no strict entailment here, though, “merely” evidence.
That said, mere evidence can get us pretty far. I am not inclined to dismiss it.
I’d do it that way. It doesn’t strike me as morally urgent to prevent people with pain asymbolia from experiencing the sensation of “pain”. (Subjects report that they notice the sensation of pain, but they claim it doesn’t bother them.) I’d define suffering as wanting to get out of the state you’re in. If you’re fine with the state you’re in, it is not what I consider to be suffering.
Ok, that seems workable to a first approximation.
So, a question for anyone who both agrees with that formulation and thinks that “we should care about the suffering of animals” (or some similar view):
Do you think that animals can “want to get out of the state they’re in”?
Yes?
This varies from animal to animal. There’s a fair amount of research/examination into which animals appear to do so, some of which is linked to elsewhere in this discussion. (At least some examination was linked to in response to a statement about fish.)
On why the suffering of one species would be more important than the suffering of another:
Does that also apply to race and gender? If not, why not? Assuming a line-up of ancestors, always mother and daughter, from Homo sapiens back to the common ancestor of humans and chickens and forward in time again to modern chickens, where would you draw the line? A common definition of species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?
I feel psychologically similar to humans of different races and genders but I don’t feel psychologically similar to members of most different species.
Uh, no. System 1 doesn’t know what a species is; that’s just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can’t, not really.
This general argument of “the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad” strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn’t have this property?
Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can’t Say?
I should add to this that even if I endorse what you call “prejudice against prejudice” here—that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence—it doesn’t follow that because racists or sexists can use a particular argument A as a line of defense, there’s therefore something wrong with A.
There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.
Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I started out with the premise that I don’t want to be the sort of person that would have been racist or sexist in previous centuries. If you don’t share that premise, there is no way for me to show that you’re being inconsistent—I acknowledge that.
Wow! So you’ve solved friendly AI? Eliezer will be happy to hear that.
I’m pretty sure Eliezer already knew our brains contained the basis of morality.
I should probably clarify—when I said that valuing humans over animals strikes me as arbitrary, I’m saying that it’s arbitrary within the context of my personal moral framework, which contains no axioms from which such a distinction could be derived. All morality is ultimately arbitrary and unjustified, so that’s not really an argument for or against any moral system. Internal inconsistencies could be arguments, if you value consistency, but your system does seem internally consistent. My original comment was meant more of an explanation of my initial reaction to your question rather than anything that would be convincing on logical grounds, though I did also assign some probability to it possibly being convincing on non-logical grounds. (Our moral axioms are influenced by what other people think, and somebody expressing their disagreement with a moral position has some chance of weakening another person’s belief in that position, regardless of whether that effect is “logical”.)
I’ve been meaning to write a post about how I think it’s a really, really bad idea to think about morality in terms of axioms. This seems to be a surprisingly (to me) common habit among LW types, especially since I would have thought it was a habit the metaethics sequence would have stomped out.
(You shouldn’t regard it as a strength of your moral framework that it can’t distinguish humans from non-human animals. That’s evidence that it isn’t capable of capturing complexity of value.)
I agree that thinking about morality exclusively in terms of axioms in a system of classical logic is likely to be a rather bad idea, since that makes one underestimate the complexity of morality and the strength of non-logical influences, and overestimate the extent to which morality resembles a system of classical logic in general. But I’m not sure if it’s that problematic as long as you keep in mind that “axioms” is really just shorthand for something like “moral subprograms” or “moral dynamics”.
I did always read the metaethics sequence as establishing the existence of something similar-enough-to-axioms-that-we-might-as-well-use-the-term-axioms-as-shorthand-for-them, with e.g. No Universally Compelling Arguments and Created Already In Motion arguing that you cannot convince a mind of the correctness of some action unless it contains a dynamic which reacts to your argument in the way you wish—in other words, unless your argument builds on things that the mind’s decision-making system already cares about, and which could be described as axioms when composing a (static) summary of the mind’s preferences.
I’m not really sure of what you mean here. For one, I didn’t say that my moral framework can’t distinguish humans and non-humans—I do e.g. take a much more negative stance on killing humans than animals, because killing humans would have a destabilizing effect on society and people’s feelings of safety, which would contribute to the creation of much more suffering than killing animals would.
Also, whether or not my personal moral framework can capture complexity of value seems irrelevant—CoV is just the empirical thesis that people in general tend to care about a lot of complex things. My personal consciously-held morals are what I currently want to consciously focus on, not a description of what others want, nor something that I’d program into an AI.
Well, I don’t think I should care what I care about. The important thing is what’s right, and my emotions are only relevant to the extent that they communicate facts about what’s right. What’s right is too complex, both in definition and consequentialist implications, and neither my emotions nor my reasoned decisions are capable of accurately capturing it. Any consciously-held morals are only a vague map of morality, not morality itself, and so shouldn’t hold too much import, on pain of moral wireheading/acceptance of a fake utility function.
(Listening to moral intuitions, possibly distilled as moral principles, might give the best moral advice that’s available in practice, but that doesn’t mean that the advice is any good. Observing this advice might fail to give an adequate picture of the subject matter.)
I must be misunderstanding this comment somehow? One still needs to decide what actions to take during every waking moment of their lives, and “in deciding what to do, don’t pay attention to what you want” isn’t very useful advice. (It also makes any kind of instrumental rationality impossible.)
What you want provides some information about what is right, so you do pay attention. When making decisions, you can further make use of moral principles not based on what you want at a particular moment. In both cases, making use of these signals doesn’t mean that you expect them to be accurate, they are just the best you have available in practice.
An estimate of the accuracy of the moral intuitions/principles translates into an estimate of the value of information about morality. Overestimation of accuracy would lead to excessive exploitation, while an expectation of inaccuracy argues for valuing research about morality comparatively more than pursuit of moral-in-current-estimation actions.
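(Purely as a toy illustration of the exploration/exploitation framing, not a claim about how anyone actually weighs moral research against action: the sketch below, with a made-up allocate_effort function and arbitrary numbers, just shows how a lower assumed accuracy of the current value estimate translates into spending more of a fixed effort budget on research.)

```python
def allocate_effort(confidence_in_current_values: float, budget: int = 10) -> dict:
    """Split a fixed effort budget between acting on the current estimate of
    what is valuable (exploitation) and researching what is actually valuable
    (exploration). Lower confidence means more of the budget goes to research.
    Illustrative toy model only; the numbers carry no real meaning."""
    research = round(budget * (1.0 - confidence_in_current_values))
    return {"research": research, "act_on_current_estimate": budget - research}

print(allocate_effort(0.9))  # {'research': 1, 'act_on_current_estimate': 9}
print(allocate_effort(0.3))  # {'research': 7, 'act_on_current_estimate': 3}
```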
I’m not a very well educated person in this field, but if I may:
I see my various squishy feelings (desires and what-is-right intuitions are in this list) as loyal pets. Sometimes, they must be disciplined and treated with suspicion, but for the most part, they are there to please you in their own dumb way. They’re no more enemies than one’s preference for foods. In my care for them, I train and reward them, not try to destroy or ignore them. Without them, I have no need to DO better among other people, because I would not be human—that is, some things are important only because I’m a barely intelligent ape-man, and they should STAY important as long as I remain a barely intelligent ape-man. Ignoring something going on in one’s mind, even when one KNOWS it is wrong, can be a source of pain, I’ve found—hypocrisy and indecision are not my friends.
Hope I didn’t make a mess of things with this comment.
I’m roughly in agreement, though I would caution that the exploration/exploitation model is a problematic one to use in this context, for two reasons:
1) It implies a relatively clear map/territory split: there are our real values, and our conscious model of them, and errors in our conscious model do not influence the actual values. But to some extent, our conscious models of our values do shape our unconscious values in that direction—if someone switches to an exploitation phase “too early”, then over time, their values may actually shift over to what the person thought they were.
2) Exploration/exploitation also assumes that our true values correspond to something akin to an external reward function: if our model is mistaken, then the objectively correct thing to do would be to correct it. In other words, if we realize that our conscious values don’t match our unconscious ones, we should revise our conscious values. And sometimes this does happen. But on other occasions, what happens is that our conscious model has become installed as a separate and contradictory set of values, and we need to choose which of the values to endorse (in which situations). This happening is a bad thing if you tend to primarily endorse your unconscious values or a lack of internal conflict, but arguably a good thing if you tend to primarily endorse your conscious values.
The process of arriving at our ultimate values seems to be both an act of discovering them and an act of creating them, and we probably shouldn’t use terminology like exploration/exploitation that implies that it would be just one of those.
This is value drift. At any given time, you should fix (i.e. notice, as a concept) the implicit idealized values at that time and pursue them even if your hardware later changes and starts implying different values (in the sense where your dog or your computer or an alien also should (normatively) pursue them forever, they are just (descriptively) unlikely to, but you should plot to make that more likely, all else equal). As an analogy, if you are interested in solving different puzzles on different days, then the fact that you are no longer interested in solving yesterday’s puzzle doesn’t address the problem of solving yesterday’s puzzle. And idealized values don’t describe valuation of you, the abstract personal identity, of your actions and behavior and desires. They describe valuation of the whole world, including future you with value drift as a particular case that is not fundamentally special. The problem doesn’t change, even if the tendency to be interested in a particular problem does. The problem doesn’t get solved because you are no longer interested in it. Solving a new, different problem does not address the original problem.
The nature of idealized values is irrelevant to this point: whatever they are, they are that thing that they are, so that any “correction” discards the original problem statement and replaces it with a new one. What you can and should correct are intermediate conclusions. (Alternatively, we are arguing about definitions, and you read in my use of the term “values” what I would call intermediate conclusions, but then again I’m interested in you noticing the particular idea that I refer to with this term.)
I don’t think “unconscious values” is a good proxy for abstract implicit valuation of the universe; consciously-inaccessible processes in the brain are at a vastly different level of abstraction compared to the idealization I’m talking about.
This might be true in the sense that humans probably underdetermine the valuation of the world, so that there are some situations that our implicit preferences can’t compare even in principle. The choice between such situations is arbitrary according to our values. Or our values might just recursively determine the correct choice in every single definable distinction. Any other kind of “creation” will contradict the implicit answer, and so even if it is the correct thing to do given the information available at the time, later reflection can show it to be suboptimal.
(More constructively, the proper place for creativity is in solving problems, not in choosing a supergoal. The intuition is confused on this point, because humans never saw a supergoal, all sane goals that we formulate for ourselves are in one way or another motivated by other considerations, they are themselves solutions to different problems. Thus, creativity is helpful in solving those different problems in order to recognize which new goals are motivated. But this is experience about subgoals, not idealized supergoals.)
I think that the concept of idealized value is obviously important in an FAI context, since we need some way of formalizing “what we want” in order to have any way of ensuring that an AI will further the things we want. I do not understand why the concept would be relevant to our personal lives, however.
The question of what is normatively the right thing to do (given the resources available) is the same for a FAI and in our personal lives. My understanding is that “implicit idealized value” is the shape of the correct answer to it, not just a tool restricted to the context of FAI. It might be hard for a human to proceed from this concept to concrete decisions, but this is a practical difficulty, not a restriction on the scope of applicability of the idea. (And to see how much of a practical difficulty it is, it is necessary to actually attempt to resolve it.)
If idealized value indicates the correct shape of normativity, the question should instead be, How are our personal lives relevant to idealized value? One way was discussed a couple of steps above in this conversation: exploitation/exploration tradeoff. In pursuit of idealized values, if in our personal lives we can’t get much information about them, a salient action is to perform/support research into idealized values (or relevant subproblems, such as preventing/evading global catastrophes).
What does this mean? It sounds like you’re talking about some kind of objective morality?
I’ve interacted with enough red-haired people and enough black-haired people that (assuming the anti-zombie principle) I’m somewhat confident that there’s no big difference on average between the ways they suffer. I’m nowhere near as confident about fish.
I already addressed that uncertainty in my comment:
To elaborate: it’s perfectly reasonable to discount the suffering of e.g. fish by some factor because one thinks that fish probably suffer less. But as I read it, someone who says “human suffering is more important” isn’t saying that: they’re saying that they wouldn’t care about animal suffering even if it was certain that animals suffered just as much as humans, or even if it was certain that animals suffered more than humans. It’s saying that no matter the intensity or nature of the suffering, only suffering that comes from humans counts.
Even less so about silverfish, despite their complex mating rituals.
Human suffering might be orders of magnitude more important. (Though: what reason do you have in mind for this?) But non-human animal suffering is likely to be orders of magnitude more common. Some non-human animals are probably capable of suffering, and we care a great deal about suffering in the case of humans (as, presumably, we would in the case of intelligent aliens). So it seems arbitrary to exclude non-human animal suffering from our concerns completely. Moreover, if you’re uncertain about whether animals suffer, you should err on the side of assuming that they do because this is the safer assumption. Mistakenly killing thousands of suffering moral patients over your lifetime is plausibly a much bigger worry than mistakenly sparing thousands of unconscious zombies and missing out on some mouth-pleasures.
I’m not a vegetarian myself, but I do think vegetarianism is a morally superior option. I also think vegetarians should adopt a general policy of not paying people to become vegetarians (except perhaps as a short-term experiment, to incentivize trying out the lifestyle).
I’m a human and I care about humans. Animals only matter insofar as they affect the lives of humans. Is this really such a difficult concept?
I don’t mean per organism, I mean in aggregate. In aggregate, I think the totality of animal suffering is orders of magnitude less important than the totality of human suffering.
I’m not disagreeing that animals suffer. I’m telling you that I don’t care whether they suffer.
You are many things: a physical object, a living being, a mammal, a member of the species Homo sapiens, an East Asian (I believe), etc. What’s so special about the particular category you picked?
The psychological unity of humankind. See also this comment.
Presumably mammals also exhibit more psychological similarity with one another than with non-mammals, and the same is probably true of East Asians relative to members of other races. What makes the psychological unity of mankind special?
Moreover, it seems that insofar as you care about humans because they have certain psychological traits, you should care about any creature that has those traits. Since many animals have many of the traits that humans have, and some animals have those traits to a greater degree than some humans do, it seems you should care about at least some nonhuman animals.
I’m willing to entertain this possibility. I’ve recently been convinced that I should consider caring about dolphins and other similarly intelligent animals, possibly including pigs (so I might be willing to give up pork). I still don’t care about fish or chickens. I don’t think I can have a meaningful relationship with a fish or a chicken even in principle.
I suspect that if you plotted all living beings by psychological similarity with Qiaochu_Yuan, there would be a much bigger gap between the -- [reminds himself about small children, people with advanced-stage Alzheimer’s, etc.] never mind.
:-)
(I could steelman my yesterday self by noticing that even though small children aren’t similar to QY they can easily become so in the future, and by replacing “gap” with “sparsely populated region”.)
Doesn’t follow. If we imagine a personhood metric for animals evaluated over some reasonably large number of features, it might end up separating (most) humans from all nonhuman animals even if for each particular feature there exist some nonhuman animals that beat humans on it. There’s no law of ethics saying that the parameter space has to be small.
It’s not likely to be a clean separation, and there are almost certainly some exceptional specimens of H. sapiens that wouldn’t stand up to such a metric, but—although I can’t speak for Qiaochu—that’s a bullet I’m willing to bite.
Does not follow, since an equally valid conclusion is that Qiaochu_Yuan should not-care about some humans (those that exhibit relevant traits less than some nonhuman animals). One person’s modus ponens is etc.
Every human I know cares at least somewhat about animal suffering. We don’t like seeing chickens endlessly and horrifically tortured—and when we become vividly acquainted with such torture, our not-liking-it generally manifests as a desire for the torture to stop, not just as a desire to become ignorant that this is going on so it won’t disturb our peace of mind. I’ll need more information to see where the disanalogy is supposed to be between compassion for other species and compassion for other humans.
Are you certain you don’t care?
Are you certain that you won’t end up viewing this dispassion as a bias on your part, analogous to people in history who genuinely didn’t care at all about black people (but would regret and abandon this apathy if they knew all the facts)?
If you feel there’s any realistic chance you might discover that you do care in the future, you should again err strongly on the side of vegetarianism. Feeling a bit silly 20 years from now because you avoided torturing beings it turns out you don’t care about is a much smaller cost than learning 20 years from now that you’re the Hitler of cows. Vegetarianism accommodates meta-uncertainty about ethical systems better than its rivals do.
I don’t feel psychologically similar to a chicken in the same way that I feel psychologically similar to other humans.
No, or else I wouldn’t be asking for arguments.
This is a good point.
I don’t either, but unless I can come up with a sharp and universal criterion for distinguishing all chickens from all humans, chickens’ psychological alienness to me will seem a difference of degree more than of kind. It’s a lot easier to argue that chicken suffering matters less than human suffering (or to argue that chickens are zombies) than to argue that chicken suffering is completely morally irrelevant.
Some chickens may very well have more psychologically in common with me than I have in common with certain human infants or with certain brain-damaged humans; but I still find myself able to feel that sentient infants and disabled sentient humans oughtn’t be tortured. (And not just because I don’t want their cries to disturb my own peace of mind. Nor just because they could potentially become highly intelligent, through development or medical intervention. Those might enhance the moral standing of any of these organisms, but they don’t appear to exhaust it.)
That’s not a good point; that’s a variety of Pascal’s Mugging: you’re suggesting that the fact that the possible consequence is large (“I tortured beings” is a really negative thing) means that even if the chance is small, you should act on that basis.
It’s not a variant of Pascal’s Mugging, because the chances aren’t vanishingly small and the payoff isn’t nearly infinite.
I don’t believe you. If you see someone torturing a cat, a dolphin or a monkey, would you feel nothing? (Suppose that they are not likely to switch to torturing humans, to avoid “gateway torture” complications.)
My problem with this question is that if I see video of someone torturing a cat when I am confident there was no actual cat-torturing involved in creating those images (e.g., I am confident it was all photoshopped), what I feel is pretty much indistinguishable from what I feel if I see video of someone torturing a cat when I am confident there was actual cat-torturing.
So I’m reluctant to treat what I feel in either case as expressing much of an opinion about suffering, since I feel it roughly equally when I believe suffering is present and when I don’t.
So if you can factor-out, so to speak, the actual animal suffering: If you had to choose between “watch that video, no animal was harmed” versus “watch that video, an animal was harmed, also you get a biscuit (not the food, the 100 squid (not the animals, the pounds (not the weight unit, the monetary unit)))”, which would you choose? (Your feelings would be the same, as you say, your decision probably wouldn’t be. Just checking.)
What?
A biscuit provides the same number of calories as 100 SQUID, which stands for Superconducting Quantum Interference Device, which weigh a pound apiece, which masses 453.6 grams, which converts to 4 * 10^16 joules, which can be converted into 1.13 * 10^10 kilowatt-hours, which are worth 12 cents per kW-hr, so around 136 billion dollars.
...plus a constant.
Reminds me of … Note the name of the website. She doesn’t look happy! “I am altering the deal. Pray I don’t alter it any further.”
Edit: Also, 1.13 * 10^10 kilowatt-hours at 12 cents each yields 1.36 billion dollars, not 136 billion dollars! An honest mistake (cents, not dollars per kWh), or a scam? And as soon as Dmitry is less active …
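(For what it’s worth, here is a quick sanity check of that chain of conversions, assuming one pound of mass fully converted via E = mc^2 and electricity at 12 cents per kWh; it agrees with the corrected figure of about 1.36 billion dollars.)

```python
mass_kg = 0.4536              # one pound, in kilograms
c = 3.0e8                     # speed of light, m/s
energy_j = mass_kg * c**2     # E = mc^2  ->  roughly 4 * 10^16 J

kwh = energy_j / 3.6e6        # 1 kWh = 3.6e6 J  ->  roughly 1.13 * 10^10 kWh
dollars = kwh * 0.12          # at 12 cents per kWh

print(f"{energy_j:.2e} J, {kwh:.2e} kWh, ${dollars:.2e}")
# -> 4.08e+16 J, 1.13e+10 kWh, $1.36e+09 (about 1.36 billion dollars)
```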
“squid” is slang for a GBP, i.e. Pound Sterling, although I’m more used to hearing the similar “quid.” One hundred of them can be referred to as a “biscuit,” apparently because of casino chips, similar to how people in America will sometimes refer to a hundred dollars as a “benjamin.”
That is, what are TheOtherDave’s preferences between watching an unsettling movie that does not correspond to reality and watching an unsettling movie that does correspond to reality, but they’re paid some cash.
“Quid” is slang, “squid” is a commonly used jokey soundalike. There’s a joke that ends “here’s that sick squid I owe you”.
EDIT: also, never heard “biscuit” = £100 before; that’s a “ton”.
Does Cockney rhyming slang not count as slang?
In this case it seems to. It’s the first time I recall encountering it, but I’m not British, and my parsing of unfamiliar and ‘rough’ accents is such that if I happened to have heard someone say ‘squid’ I may have parsed it as ‘quid’ and discarded the ‘s’ as noise from people saying a familiar term in a weird way rather than a different term.
It amuses me that despite making neither head nor tail of the unpacking, I answered the right question.
Well, to the extent that my noncommittal response can be considered an answer to any question at all.
Well, I figured that much out from googling, but I was more reacting to what seems like a deliberate act of obfuscation on Kawoomba’s part that serves no real purpose.
Nested parentheses are their own reward, perhaps?
In an interesting twist, in many social circles (not here) your use of the word “obfuscation” would be obfuscatin’ in itself.
To be very clear though: “Eschew obfuscation, espouse elucidation.”
So to be clear—you do some Googling and find two videos, one has realistic CGI animal harm, the other real animal harm; assume the CGI etc is so good that I wouldn’t be able to tell which was which if you hadn’t told me. You don’t pay for the animal harm video, or in any way give anyone an incentive to harm an animal in fetching it; just pick up a pre-existing one. I have a choice between watching the fake-harm video (and knowing it’s fake) or watching the real-harm video and receiving £100.
If the reward is £100, I’ll take the £100; if it’s an actual biscuit, I prefer to watch the fake-harm video.
I’m genuinely unsure, not least because of your perplexing unpacking of “biscuit”.
Both examples are unpleasant; I don’t have a reliable intuition as to which is more so if indeed either is.
I have some vague notion that if I watch the real-harm video that might somehow be interpreted as endorsing real-harm more strongly than if I watch the fake-harm video, like through ratings or download monitoring or something, which inclines me to the fake-harm video. Though whether I’m motivated by the vague belief that such differential endorsement might cause more harm to animals, or by the vague belief that it might cause more harm to my status, I’m again genuinely unsure of. In the real world I usually assume that when I’m not sure it’s the latter, but this is such a contrived scenario that I’m not confident of that either.
If I assume the biscuit is a reward of some sort, then maybe that reward is enough to offset the differential endorsement above, and maybe it isn’t.
I don’t want to see animals get tortured because that would be an unpleasant thing to see, but there are lots of things I think are unpleasant things to see that don’t have moral valence (in another comment I gave the example of seeing corpses get raped).
I might also be willing to assign dolphins and monkeys moral value (I haven’t made up my mind about this), but not most animals.
Do you have another example besides the assault of corpses? I can easily see real moral repugnance from the effect it has on the offenders, who are victims of their own actions. If you find it unpleasant only when you see it, would not they find it horrific when they perform it?
Also in these situations, repugnance can leak due to uncertainty of other real moral outcomes, such as the (however small) likelihood of family members of the deceased learning of the activity, for whom these corpses have real moral value.
Two Girls One Cup?
Seeing humans perform certain kinds of body modifications would also be deeply unpleasant to me, but it’s also not an act I assign moral valence to (I think people should be allowed to modify their bodies more or less arbitrarily).
I’ll chime in to comment that QiaochuYuan’s[1] views as expressed in this entire thread are quite similar to my own (with the caveat that for his “human” I would substitute something like “sapient, self-aware beings of approximately human-level intelligence and above” and possibly certain other qualifiers having to do with shared values, to account for Yoda/Spock/AIs/whatever; it seems like QiaochuYuan uses “approximately human” to mean roughly this).
So, please reconsider your disbelief.
[1] Sorry, the board software is doing weird things when I put in underscores...
So, presumably you don’t keep a pet, and if you did, you would not care for its well-being?
Indeed, I have no pets.
If I did have a pet, it is possible that I would not care for it (assuming animal cruelty laws did not exist), although it is more likely that I would develop an attachment to it, and would come to care about its well-being. That is how humans work, in my experience. I don’t think this necessarily has any implications w.r.t. the moral status of nonhuman animals.
Do you consider young children and very low-intelligence people to be morally-relevant?
(If—in the case of children—you consider potential for later development to be a key factor, we can instead discuss only children who have terminal illnesses.)
Good question. Short answer: no.
Long answer: When I read Peter Singer, what I took away was not, as many people here apparently did, that we should value animals; what I took away is that we should not value fetuses, newborns, and infants (to a certain age, somewhere between 0 and 2 years [1]). That is, I think the cutoff for moral relevance is somewhere above, say, cats, dogs, newborns… where exactly? I’m not sure.
Humans who have a general intelligence so low that they are incapable of thinking about themselves as conscious individuals are also, in my view, not morally relevant. I don’t know whether such humans exist (most people with Down syndrome don’t quite seem to fit that criterion, for instance).
There are many caveats and edge cases, for instance: what if the low-intelligence condition is temporary, and will repair itself with time? Then I think we should consider the wishes of the self that the person was before the impairment, and the rights of their future, non-impaired, selves. But what if the impairment can be repaired using medical technology? Same deal. What if it can’t? Then I would consider this person morally irrelevant. What if the person was of extremely low intelligence, and had always been so, but we could apply some medical intervention to raise their intelligence to at least normal human level? I would consider that act morally equivalent to creating a new sapient being (whether that’s good or bad is a separate question).
So: it’s complicated. But to answer practical questions: I don’t consider infanticide the moral equivalent of murder (although it’s reasonable to outlaw it anyway, as birth is a good Schelling point, but the penalty should surely be nowhere near as harsh as for killing an adult or older child). The rights of low-intelligence people are a harder issue, partly because there are no obvious cutoffs or metrics.
I hope that answers your question; if not, I’ll be happy to elaborate further.
Ethical generalizations check: Do you care about Babyeaters? Would you eat Yoda?
Would that allow absorbing some of his midichlorians? Black magic! Well, I might try (since he died of natural causes anyway). But Yoda dies without leaving a corpse. It would be difficult. The only viable strategy would seem to be to have Yoda anesthetize himself a minute before he ghosts (“becomes one with the force”). Then the flesh would remain corporeal for consumption.
The real ethical test would be: would I freeze Yoda’s head in carbonite, acquire brain scanning technology and upload him into a robot body? Yoda may have religious objections to the practice, so I may honour his preferences while being severely disappointed. I suspect I’d choose the Dark Side of the Force myself. The Sith philosophy seems much more compatible with life extension by whatever means necessary.
It should be noted that Yoda has an observable afterlife. Obi-wan had already appeared after his body had died, apparently in full possession of his memories and his reasoning abilities; Yoda proposes to follow in Obi-wan’s footsteps, and has good reason to believe that he will be able to do so.
Sith philosophy, for reference:
Peace is a lie, there is only passion.
Through passion, I gain strength.
Through strength, I gain power.
Through power, I gain victory.
Through victory, my chains are broken.
The Force shall free me.
Actual use of Sith techniques seems to turn people evil at ridiculously accelerated rates. At least in-universe it seems that sensible people would write off this attractive-sounding philosophy as window dressing on an extremely damaging set of psychic techniques.
If you’re lucky, it might grant intrinsic telepathy, as long as the corpse is relatively fresh.
Nope (can’t parse them as approximately human without revulsion). Nope (approximately human).
I wouldn’t eat flies or squids either. But I know that that’s a cultural construct.
Let’s ask another question: would I care if someone else eats Yoda?
Well, I might, but only because eating Yoda is, in practice, correlated with lots of other things I might find undesirable. If I could be assured that such was not the case (for instance, if there was another culture which ate the dead to honor them, that’s why he ate Yoda, and Yoda’s will granted permission for this), then no, I wouldn’t care if someone else eats Yoda.
In practice? In common Yoda-eating practice? Something about down-to-earth ‘in practice’ empirical observations about things that cannot possibly have ever occurred strikes me as broken. Perhaps “would be, presumably, correlated with”.
In Yoda’s case he could even have just asked for permission from Yoda’s force ghost. Jedi add a whole new level of meaning to “Living Will”.
“In practice” doesn’t mean “this is practiced”, it means “given that this is done, what things are, with high probability, associated with it in real-life situations” (or in this case, real-life-+-Yoda situations). “In practice” can apply to rare or unique events.
I really don’t think statements of the form “X is, in practice, correlated with Y” should apply to situations where X has literally never occurred. You might want to say “I expect that X would, in practice, be correlated with Y” instead.
All events have never occurred if you describe them with enough specificity; I’ve never eaten this exact sandwich on this exact day.
While nobody has eaten Yoda before, there have been instances where people have eaten beings that could talk intelligently.
I share Qiaochu’s reasoning.
I am a moral anti-realist, so I don’t think there’s any argument I could give you to persuade you to change your values. To me, it feels very inconsistent to not value animals—it sounds to me exactly like someone asking for an argument for why they ought to care about foreigners.
Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction. Though maybe you wouldn’t, or you would think the reaction irrational? I don’t know.
However, if you really do care about humans and humans alone, the environmental argument still has weight, though certainly less.
~
One can get both protein and deliciousness from non-meat sources.
~
I’m not sure. I don’t think there’s a way I could make that transaction work.
Some interesting things about this example:
Distance seems to have a huge impact when it comes to the bystander effect, and it’s not clear that it’s irrational. If you are the person who is clearly best situated to save a puppy from torture, that seems different from the fact that dogs are routinely farmed for meat in other parts of the world, by armies of people you could not hope to personally defeat or control.
Someone who is willing to be sadistic to animals might be sadistic towards humans as well, and so they may be a poor choice to associate with (and possibly a good choice to anti-associate with).
Many first world countries have some sort of law against bestiality. (In the US, this varies by state.) However, any justification for these laws based on the rights of the animals would also rule out related behavior in agribusiness, which is generally legal. There seems to be a difference between what people are allowed to do for fun and what they’re allowed to do for profit; this makes sense in light of viewing the laws as not against actions, but kinds of people.
Well, and what would you say to someone who thought that?
I don’t know. It doesn’t feel like I do. You could try to convince me that I do even if you’re a moral anti-realist. It’s plausible I just haven’t spent enough time around animals.
Probably. I mean, all else being equal I would prefer that an animal not be tortured, but in the case of farming and so forth all else is not equal. Also, like Vaniver said, any negative reaction I have directed at the person is based on inferences I would make about that person’s character, not based on any moral weight I directly assign to what they did. I would also have some sort of negative reaction to someone raping a corpse, but it’s not because I value corpses.
My favorite non-meat dish is substantially less delicious than my favorite meat dish. I do currently get a decent amount of protein from non-meat sources, but asking someone who gets their protein primarily from meat to give up meat means asking them to incur a cost in finding and purchasing other sources of protein, and that cost needs to be justified somehow.
Really? This can’t be that hard a problem to solve. We could use a service like Fiverr, with you paying me $5 not to eat meat for some period of time.
Right now, I don’t know. I feel like it would be playing a losing game. What would you say?
I’m not sure how I would do that. Would you kick a puppy? If not, why not?
How could I verify that you actually refrain from eating meat?
I would probably say something like “you just haven’t spent enough time around them. They’re less different from you than you think. Get to know them, and you might come to see them as not much different from the people you’re more familiar with.” In other words, I would bet on the psychological unity of mankind. Some of this argument applies to my relationship with the smarter animals (e.g. maybe pigs) but not to the dumber ones (e.g. fish). Although I’m not sure how I would go about getting to know a pig.
No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn’t chop down a tree either, but it’s not because I think trees have moral value, and I don’t plan to take any action against the logging industry as a result.
Oh, that’s what you were concerned about. It would be beneath my dignity to lie for $5, but if that isn’t convincing, then I dunno. (On further thought, this seems like a big problem for measuring the actual impact of any proposed vegetarian proselytizing. How can you verify that anyone actually refrains from eating meat?)
“No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn’t chop down a tree either, but it’s not because I think trees have moral value, and I don’t plan to take any action against the logging industry as a result.”
All else is never precisely equal. If I offered you £100 to do one of these of your choice, would you rather a) give up meat for a month, or b) beat a puppy to death?
I suspect that the vast majority of people who eat battery chicken to save a few dollars would require much more money to directly cause the same sort of suffering to a chicken. Whereas when it came to chopping down trees it would be more a matter of whether the cash was worth the effort. Of course, it could very easily be that the problem here is not with Person A (detached, callous eater of battery chicken) but with Person B (overempathic, anthropomorphizing person who doesn’t like to see chickens suffering), but the contrast is quite telling.
For what it’s worth, I also wouldn’t treat painlessly and humanely slaughtering a chicken who has lived a happy and fulfilled life with my own hands equivalently to paying someone else to do so where I don’t have to watch. There’s quite a contrast there, as well, but it seems to have little to do with suffering.
That said, I would almost undoubtedly prefer watching a chicken be slaughtered painlessly and humanely to watching it suffer while being slaughtered.
Probably also to watching it suffer while not being slaughtered.
Mostly, I conclude that my preferences about what I want to do, what I want to watch, and what I want to have done on my behalf, are not well calibrated to one another.
Yeah, that’s the only clear conclusion. The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled. Honestly not sure where I stand on this.
I don’t think that “not enjoying killing a chicken” should be described as an “intuition”. Moral intuitions generally take the form of “it seems to me that / I strongly feel that so-and-so is the right thing to do / the wrong thing to do / bad / good / etc.” What you do or do not enjoy doing is a preference, like enjoying chocolate ice cream, not enjoying ice skating, being attracted to blondes, etc. Preferences can’t be “true” or “false”, they’re just facts about your mental makeup. (It may make sense to describe a preference as “invalid” in certain senses, however, but not obviously any senses relevant to this current discussion.)
So for instance “I think killing a chicken is morally ok” (a moral intuition) and “I don’t like killing chickens” (a preference) do not conflict with each other any more than “I think homosexuality is ok” and “I am heterosexual” conflict with each other, or “Being a plumber is ok (and in fact plumbers are necessary members of society)” and “I don’t like looking inside my plumbing”.
Now, if you wanted to take this discussion to a slightly more subtle level, you might say: “This is different! Killing chickens causes in me a kind of psychic distress usually associated with witnessing or performing acts that I also consider to be immoral! Surely this is evidence that this, too, is immoral?” To that I can respond only that yes, this may be evidence in the strict Bayesian sense, but the signals your brain generates may be flawed. We should evaluate the ethical status of the act in question explicitly; yes, we should take moral intuitions into account, but my intuition, at least, is that chicken-killing is fine, despite having no desire to do it myself. This screens off the “agh I don’t want to do/watch this!” signal.
The dividing lines between the kinds of cognitive states I’m inclined to call “moral intuitions” and the kinds of cognitive states I’m inclined to call “preferences” and the kinds of cognitive states I’m inclined to call “psychic distress” are not nearly as sharp, in my experience, as you seem to imply here. There’s a lot of overlap, and in particular the states I enter surrounding activities like killing animals (especially cute animals with big eyes) don’t fall crisply into just one category.
But, sure, if we restrict the discussion to activities where those categories are crisply separated, those distinctions are very useful.
Mm. If you mean to suggest that the outcome of moral reasoning is necessarily that one of my intuitions gets endorsed, then I disagree; I would expect worthwhile moral reasoning to sometimes endorse claims that my intuition didn’t provide in the first place, as well as claims that my intuitions consistently reject.
In particular, when my moral intuitions conflict (or, as SaidAchmiz suggests, when the various states that I have a hard time cleanly distinguishing from my moral intuitions despite not actually being any such thing conflict), I usually try to envision patterning the world in different ways that map in some fashion to some weighting of those states, ask myself what the expected end result of that patterning is, see whether I have clear preferences among those expected endpoints, work backwards from my preferred endpoint to the associated state-weighting, and endorse that state-weighting.
The result of that process is sometimes distressingly counter-moral-intuitive.
Sorry, I was unclear: I meant moral (and political) arguments from other people—moral rhetoric if you like—often takes that form.
Ah, gotcha. Yeah, that’s true.
The relevant sense of changing values is change of someone else’s purposeful behavior. The philosophical classification of your views doesn’t seem like useful evidence about that possibility.
I don’t understand what that means for my situation, though. How am I supposed to argue him out of his current values?
I mean, it’s certainly possible to change someone’s values through anti-realist argumentation. My values were changed in that way several times. But I don’t know how to do it.
This is a separate question. I was objecting to the relevance of invoking anti-realism in connection with this question, not to the bottom line where that argument pointed.
If moral realism were true, there would be a very obvious path to arguing someone out of their values—argue for the correct values. In my experience, when people want an argument to change their values, they want an argument for what the correct value is, assuming moral realism.
Moral anti-realism certainly complicates things.
That doesn’t necessarily mean that I have animals being tortured as a negative terminal value: I might only dislike that because it generates negative warm fuzzies.
This also applies to foreigners, though.
Well, it also applies to blood relatives, for that matter.
Unfortunately, the typical argument in favour of caring about foreigners, people of other races, etc., is that they are human too.
If distinct races were instead distinct human subspecies or closely-related species, would the moral case for treating these groups equally ipso facto collapse?
If not, then ‘they’re human too’ must be a stand-in for some other feature that’s really doing the pushing and pulling of our moral intuitions. At the very least, we need to taboo ‘human’ to figure out what the actual relevant concept is, since it’s not the standard contemporary biological definition.
In my case, I think that the relevant concept is human-level (or higher) intelligence. Of all the known species on Earth, humanity is the only one that I know to possess human-level or higher intelligence.
One potentially suitable test for human-level intelligence is the Turing test; due to their voice-mimic abilities, a parrot or a mynah bird may sound human at first, but it will not in general pass a Turing test.
Biological engineering on an almost-sufficiently-intelligent species (such as a dolphin) may lead to another suitably intelligent species with very little relation to a human.
That different races have effectively the same intellectual capacities is surely an important part of why we treat them as moral equals. But this doesn’t seem to me to be entirely necessary — young children and the mentally handicapped may deserve most (though not all) moral rights, while having a substantially lower level of intelligence. Intelligence might also turn out not to be sufficient; if a lot of why we care about other humans is that they can experience suffering and pleasure, and if intelligent behavior is possible without affective and evaluative states like those, then we might be able to build an AI that rivaled our intelligence but did not qualify as a moral patient, or did not qualify as one to the same extent as less-intelligent-but-more-suffering-prone entities.
Clearly, below-human-average intelligence is still worth something … so is there a cutoff point or what?
(I think you’re onto something with “intelligence”, but since intelligence varies, shouldn’t how much we care vary too? Shouldn’t there be some sort of sliding scale?)
That’s a very good question.
I don’t know.
Thinking through my mental landscape, I find that in most cases I value children (slightly) above adults. I think that this is more a matter of potential than anything else. I also put some value on an unborn human child, which could reasonably be said to have no intelligence at all (especially early on).
So, given that, I think that I put some fairly significant value on potential future intelligence as well as on present intelligence.
But, as you point out, below-human intelligence is still worth something.
...
I don’t think there’s really a firm cutoff point, such that one side is “worthless” and the other side is “worthy”. It’s a bit like a painting.
At one time, there’s a blank canvas, a paintbrush, and a pile of tubes of paint. At this point, it is not a painting. At a later time, there’s a painting. But there isn’t one particular moment, one particular stroke of the brush, when it goes from “not-a-painting” to “painting”. Similarly for intelligence; there isn’t any particular moment when it switches automatically from “worthless” to “worthy”.
If I’m going to eat meat, I have to find the point at which I’m willing to eat it by some other means than administering I.Q. tests (especially as, when I’m in the supermarket deciding whether or not to purchase a steak, it’s a bit late to administer any tests to the cow). Therefore, I have to use some sort of proxy measurement correlated with intelligence instead. For the moment, i.e. until some other species is proven to have human-level or near-human intelligence, I’m going to continue to use ‘species’ as my proxy measurement.
See Arneson’s What, if anything, renders all humans morally equal?
edit: can’t get the syntax to work, but here’s the link: www.philosophyfaculty.ucsd.edu/faculty/rarneson/singer.pdf
So what do you think of ‘sapient’ as a taboo for ‘human’? Necessary conditions on sapience will, I suppose, be things like language use and sensation. As for those mentally handicapped enough to fall below sapience, I’m willing to bite the bullet on that so long as we’re willing to discuss indirect reasons for according something moral respect. Something along the lines of Kant’s claim that cruelty to animals is wrong not because of the rights of the animal (who has none) but because wantonly harming a living thing damages the moral faculties of the agent.
How confident are you that beings capable of immense suffering, but who haven’t learned any language, all have absolutely no moral significance? That we could (as long as it didn’t damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?
I don’t see any particular reason for this to be the case, and again the risks of assuming it and being wrong seem much greater than the risks of assuming its negation and being wrong.
I’m not committed to this, or anything close. What I’m committed to is the ground of moral respect being sapience, and whatever story we tell about the moral respect accorded to non-sapient (but, say, sentient) beings is going to relate back to the basic moral respect we have for sapience. This is entirely compatible with regarding sentient non-language-users as worthy of protection, etc. In other words, I didn’t intend my suggestion about a taboo replacement to settle the moral-vegetarian question. It would be illicit to expect a rephrasing of the problem to do that.
So to answer your question:
I dunno; I didn’t claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
“Sapience” is not a crisp category. Humans are more sapient than chimpanzees, crows, and dogs. Chimpanzees, crows, and dogs are more sapient than house cats and fish. Some humans are more or less sapient than other humans.
Suppose one day we encounter a non-human intelligent species that is to us as we are to chimpanzees. Would such a species be justified in considering us non-sapient and unworthy of moral respect?
I don’t think sapience and/or sentience is necessarily a bad place to start. However I am very skeptical of attempts to draw hard lines that place all humans in one set, and everything else on Earth in another.
Well, I was suggesting a way of making it pretty crisp: it requires language use. None of those other animals can really do that. But to the extent that they might be trained to do so, I’m happy to call those animals sapient. What’s clear is that, for example, dogs, cows, or chickens are not at all sapient by this standard.
No, but I think the situation you describe is impossible. That intelligent species (assuming they understood us well enough to make this judgement) would recognize that we’re language-users. Chimps aren’t.
Sorry, still not crisp. If you’re using sapience as a synonym for language, language is not a crisp category either. Crows and elephants have demonstrated abilities to communicate with other members of their own species. Chimpanzees can be taught enough language to communicate bidirectionally with humans. Exactly what this means for animal cognition and intelligence is a matter of much dispute among scientists, as is whether animals can really be said to use language or not; but the fact that it is disputed should make it apparent that the answer is not obvious or self-evident. It’s a matter of degree.
Ultimately this just seems like a veiled way to specially privilege humans, though not all of them. Is a stroke victim with receptive aphasia nonsapient? You might equally well pick the use of tools to make other tools, or some other characteristic to draw the line where you’ve predetermined it will be drawn; but it would be more honest to simply state that you privilege Homo sapiens sapiens, and leave it at that.
Not a synonym. Language use is a necessary condition. And by ‘language use’ I don’t mean ‘ability to communicate’. I mean more strictly something able to work with things like syntax and semantics and concepts and stuff. We’ve trained animals to do some pretty amazing things, but I don’t think any, or at least not more than a couple, are really language users. I’m happy to recognize the moral worth of any there are, and I’m happy to recognize a gradient of worth on the basis of a gradient of sapience. I don’t think anything we’ve encountered comes close to human beings on such a gradient, but that might just be my ignorance talking.
It’s not veiled! I think humans are privileged, special, better, more significant, etc. And I’m not picking an arbitrary part of what it means to be human. I think this is the very part that, were we to find it in a computer or an alien or an animal would immediately lead us to conclude that this being had moral worth.
Are you seriously suggesting that the difference between someone you can understand and someone you can’t matters just as much as the difference between me and a rock? Do you think your own moral worth would vanish if you were unable to communicate with me?
Yes, I’m suggesting both, on a certain reading of ‘can’ and ‘unable’. If I were, in principle, incapable of communicating with anyone (in the way worms are) then my moral worth, or anyway the moral worth accorded to sapient beings on the basis of their being sapient on my view, would disappear. I might have moral worth for other reasons, though I suspect these will come back to my holding some important relationship to sapient beings (like formerly being one).
If you are asking whether my moral worth would disappear if I, a language user, were by some twist of fate made unable to communicate, then my moral worth would not disappear (since I am still a language user).
The goal of defining ‘human’ (and/or ‘sapient’) here is to steel-man (or at least better understand) the claim that only human suffering matters, so we can evaluate it. If “language use and sensation” end up only being necessary or sufficient for concepts of ‘human’ that aren’t plausible candidates for the original ‘non-humans aren’t moral patients’ claim, then they aren’t relevant. The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything. But we can still adopt an outsider’s perspective and claim that universes with sentience but no sapience are better when they have a higher ratio of joy to suffering, or of preference satisfaction to preference frustration.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture. It does probably entail that our reasons for protecting sentient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons. This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us (when we have no other reason to care) is that it is specifically human suffering.
Another way to put this is that I’m defending, or trying to steel-man, the claim that the fact that a human’s suffering is human gives us a reason all on its own to think that that suffering is ethically significant. While nothing about an animal’s suffering being animal suffering gives us a reason all on its own to think that that suffering is ethically significant. We could still have other reasons to think it so, so the ‘infinite torture’ objection doesn’t necessarily land.
We can discuss that world from this one.
You seem to be using ‘anthropocentric’ to mean ‘humans are the ultimate arbiters or sources of morality’. I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’. Then by definition it doesn’t matter whether non-humans are tortured, except insofar as this also diminishes humans’ welfare. This is the definition that seems relevant to Qiaochu’s statement, “I am still not convinced that I should care about animal suffering.” The question isn’t why we should care; it’s whether we should care at all.
I don’t think which reasons happen to psychologically motivate us matters here. People can have bad reasons to do good things. More interesting is the question of whether our good reasons would all be human-related, but that too is independent of Qiaochu’s question.
No, the latter was an afterthought. The discussion begins here.
Ah, okay, to be clear, I’m not defending this view. I think it’s a strawman.
I didn’t refer to psychological reasons. An example besides Kant’s (which is not psychological in the relevant sense) might be this: it is unethical to torture a cow because though cows have no ethical significance in and of themselves, they do have ethical significance as domesticated animals, who are wards of our society. But that’s just an example of such a reason.
I took the discussion to begin from Peter’s response to that comment, since that comment didn’t contain an argument, while Peter’s did. It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
But this is getting to be a discussion about our discussion. I’m not tapping out, quite, but I would like us to move on to the actual conversation.
Not if you agreed with Qiaochu that no adequately strong reasons for caring about any non-human suffering have yet been presented. There’s no rule against agreeing with an OP.
Fair point, though we might be reading Qiaochu differently. I took him to be saying “I know of no reasons to take animal suffering as morally significant, though this is consistent with my treating it as if it is and with its actually being so.” I suppose you took him to be saying something more like “I don’t think there are any reasons to take animal suffering as morally significant.”
I don’t have good reasons to think my reading is better. I wouldn’t want to try and defend Qiaochu’s view if the second reading represents it.
If that was the case there would be no one to do the discussing.
Well, we could discuss that world from this one.
Yes, and we could, for example, assign that world no moral significance relative to our world.
I found it interesting to compare “this is the price at which we could buy animals not existing” to “this is the price people are willing to pay for animals to exist so they can eat them,” because it looks like the second is larger, often by orders of magnitude. (This shouldn’t be that surprising for a persuasion-based intervention; if you can get other people to spend their own resources, your costs are much lower.)
It also bothers me that so many of the animals saved are fish; they dominate the weighted mean, have very different lifespans from chickens, and to the best of my knowledge cannot be ‘factory farmed’ in the same way. [Edit: It appears that conditions for fish on fish farms are actually pretty bad, to the point that many species of fish cannot survive modern farming techniques. So, no comment on the relative badness.]
From what I know, fish farming doesn’t sound pleasant, though perhaps it’s not nearly as bad as chicken farming.
If that description makes you think that fish farming might possibly be in the same ballpark as chicken farming, then you’re pretty ignorant of factory farming. Maybe you haven’t seen enough propaganda?
Your other link is about killing the fish. Focusing on the death rather than the life may be good for propaganda, but do you really believe that much of the suffering is there? Indeed, your post claimed to be about days of life.
Added: it makes me wonder whether dealing in propaganda corrupts activists into focusing on the aspects for which propaganda is most effective. Or maybe it’s just that the propaganda works on them.
I never said they were in the same ballpark. Just that fish farming is also something I don’t like.
Yes, I do.
I agree that might not make much sense for fish, except in so far as farming causes more fish to be birthed than otherwise would.
I think this is a bias present in anyone who cares about advocating for or against a cause.
Here’s a gruesome video on the whole fish thing, if you’re into gruesome videos.
Well, they can move more, but on the other hand they tend to pollute each other’s environment in a way that terrestrial farmed animals do not, meaning that not all commercially fished species can survive being farmed with modern techniques, and those which can are not necessarily safe for humans to eat in the same quantities.
There are decent arguments (e.g. this) for eating less meat even if you don’t care about non-human animals as a terminal value.
You may want to take a look at this brief list of relevant writings I compiled in response to a comment by SaidAchmiz.
YMMV, but the argument that did it for me was Mylan Engel, Jr’s argument, as summarized and nicely presented here.
On the assumption that the figures given by the OP are approximately right, with my adjustments for personal values, it would be cost-effective for me to pay you $18 (via BTC) to go from habitual omnivory to 98% ovo-lacto-vegetarianism for a year, or $24 (via BTC) to go from habitual omnivory to 98% veganism for a year, both prorated by month, of course with some modicum of evidence that the change was real. Let me know if you want to take up the offer.
Looking over that argument, in the second link, I notice that those same premises would appear to support the conclusion that the most morally correct action possible would be to find some way to sterilise every vertebrate (possibly through some sort of genetically engineered virus). If there is no next generation of anything (from horses to cows to tigers to humans to chickens), then there will be no pain and suffering experienced by that next generation. The same premises would also appear to support the conclusion that, having sterilised every vertebrate on the planet, the next thing to do is to find some painless way of killing every vertebrate on the planet, lest they suffer a moment of unnecessary pain or suffering.
I find both of these potential conclusions repugnant; I recognise this as a mental safety net, warning me that I will likely regret actions taken in support of these conclusions in the long term.
This is an argument for vegetarianism, not for caring about animal suffering: much of it argues that humans would be better off if we ate less meat, which I’m also willing to entertain (since I do care about human suffering), but I was really asking about animal suffering.
$18 a year is way too low.
I’m not offering a higher price since it seems cost ineffective compared to other opportunities, but I’m curious what your price would be for a year of 98% veganism. (The 98% means that 2 non-vegan meals per month are tolerated.)
In the neighborhood of $1,000.
I’m less willing to entertain said arguments seeing as how they come from people who are likely to have their bottom lines already written.
I started reading the argument (in your second link), racked up a full hand of premises I disagreed with or found to be incoherent or terribly ill-defined before getting to so much as #10, and stopped reading.
Then I decided that no, I really should examine any argument that convinced an intelligent opponent, and read through the whole thing (though I only skimmed the objections, as they are laughably weak compared to the real ones).
Turns out my first reaction was right: this is a silly argument. Engel lists a number of premises, most of which I disagree with, launches into a tangent about environmental impact, and then considers objections that read like the halfhearted flailings of someone who’s already accepted his ironclad reasoning. As for this:
It makes me want to post the “WAT” duck in response. Like, is he serious? Or is this actually a case of carefully executed trolling? I begin to suspect the latter...
Edit: Oh, and as Qiaochu_Yuan says, the argument assumes that we care about animal suffering, and so does not satisfy the request in the grandparent.
Based on your description here of your reaction, I get the impression that you mistook the structure of the argument. Specifically, you note, as if it were sufficient, that you disagree with several of the premises. Engel was not attempting to build on the conjunction (p1*p2*...*p16) of the premises; he was building on their disjunction (p1+p2+...+p16). Your credence in p1 through p16 would have to be uniformly very low to keep their disjunction also low. Personally, I give high credence to p1, p9, p10, and varying lower degrees of assent to the other premises, so the disjunction is also quite high for me, and therefore the conclusion has a great deal of strength; but even if I later rejected p1, p9, and p10, the disjunction of the others would still be high. It’s that robustness of the argument, drawing more on many weak points than one strong one, that convinced me.
I don’t understand your duck/troll response to the quote from Engel. Everything he has said in that paragraph is straightforward. It is important that beliefs be true, not merely consistent. That does mean you oughtn’t simply reject whichever premises get in the way of the conclusions you value. p1-p16 are indeed entangled with many other beliefs, and propagating belief and value updates of rejecting more of them is likely, in most people, to be a more severe change than becoming vegetarian. Really, if you find yourself suspecting that a professional philosopher is trolling people in one of his most famous arguments, that’s a prime example of a moment to notice the fact that you’re confused. It’s possible you were reading him as saying something he wasn’t saying.
Regarding the edit: the argument does not assume that you care about animal suffering. I brought it up precisely because it didn’t make that assumption. If you want something specifically about animal suffering, presumably a Kantian argument is the way to go: You examine why you care about yourself and you find it is because you have certain properties; so if something else has the same properties, to be consistent you should care about it also. (Obviously this depends on what properties you pick.)
That’s possible, but I don’t think that’s the case. But let me address the argument in a bit more detail and perhaps we’ll see if I am indeed misunderstanding something.
First of all, this notion that the disjunction of the premises leads to accepting the conclusion is silly. No one of the premises leads to accepting the conclusion. You have to conjoin at least some of them to get anywhere. It’s not like they’re independent, leading by entirely separate lines of reasoning to the same outcome; some clearly depend on others to be relevant to the argument.
And I’m not sure what sort of logic you’re using wherein you believe p1 with low probability, p2 with low probability, p3 … etc., and their disjunction ends up being true. (Really, that wasn’t sarcasm. What kind of logic are you applying here...?) Also, some of them are actually nonsensical or incoherent, not just “probably wrong” or anything so prosaic.
The quoted paragraph:
You’re right, I guess I have no idea what he’s saying here, because this seems to me blatantly absurd on its face. If you’re interested in truth, of course you’re going to reject those beliefs most likely to be false. That’s exactly what you’re going to do. The opposite of that is what you would do if you were, in fact, interested in mere consistency rather than truth.
??? You will want to reject those and only those beliefs that are false. If you think your belief system is reasonable, then you don’t think any of your beliefs are false, or else you’d reject them. If you find that some of your beliefs are false, you will want to reject them, because if you’re interested in truth then you want to hold zero false beliefs.
I think that accepting many of (p1) – (p16) causes incoherence, actually. In any case, Engel seems to be describing a truly bizarre approach to epistemology where you care less about holding true beliefs than about not modifying your existing belief system too much, which seems like a perfect example of caring more about consistency than truth, despite him describing his view in the exact opposite manner, and… I just… I don’t know what to say.
And when I read your commentary on the above, I get the same ”… what the heck? Is he… is he serious?” reaction.
What does this mean? Should I take this as a warning against motivated cognition / confirmation bias? But what on earth does that have to do with my objections? We reject premises that are false. We accept premises that are true. We accept conclusions that we think are true, which are presumably those that are supported by premises we think are true.
… and? Again, we should hold beliefs we think are true and reject those we think are false. How on earth is picking which beliefs to accept and which to reject on the basis of what will require less updating… anything but absurd? Isn’t that one of the Great Epistemological Sins that Less Wrong warns us about?
As for the duck comment… professional philosophers troll people all the time. Having never encountered Engel’s writing before now, I of course did not know that this was his most famous argument, nor did I have any basis for being sure of serious intent in that paragraph.
Engel apparently claims that his reader already holds these beliefs, among others:
(And without that, the argument falls down.)
(Hi, sorry for the delayed response. I’ve been gone.)
Just the standard stuff you’d get in high school or undergrad college. Suppose we have independent statements S1 through Sn, and you assign each a subjective probability P(Si). Then the probability of the disjunction is P(S1+S2+S3+...+Sn) = 1-P(~S1)*P(~S2)*P(~S3)*...*P(~Sn). So if in a specific case you have n=10 and P(Si)=0.10 for all i, then even though you’re strongly disposed to reject each individual statement, you’re disposed on balance to accept the disjunction, since P(disjunction) ≈ 0.65. This is closely related to the preface paradox.
You’re right, of course, that Engel’s premises are not all independent. The general effect on the probability of a disjunction still runs in the same direction, though, since P(A+B) ≥ P(A) for any A and B.
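For concreteness, here is a minimal sketch of that arithmetic in C++ (the function name and layout are mine, purely for illustration; it just evaluates 1 - (1 - p)^n):

#include <iostream>

// Probability that at least one of n independent statements is true,
// when each is assigned the same subjective probability p:
// P(disjunction) = 1 - (1 - p)^n.
double disjunction_probability(double p, int n) {
    double all_false = 1.0;  // accumulates P(~S1) * P(~S2) * ... * P(~Sn)
    for (int i = 0; i < n; ++i) {
        all_false *= (1.0 - p);
    }
    return 1.0 - all_false;
}

int main() {
    // n = 10 and P(Si) = 0.10 for all i, as in the example above; prints roughly 0.65.
    std::cout << disjunction_probability(0.10, 10) << "\n";
    return 0;
}

Swapping in your own per-premise credences shows how quickly the disjunction climbs even when each individual premise looks doubtful.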
OK, yes, you’ve expressed yourself well and it’s clear that you’re interpreting him as having claimed the opposite of what he meant. Let me try to restate his paragraph in more LW-ish phrasing:
“As a rationalist, you are highly interested in truth, which requires consistency but also requires a useful correspondence between your beliefs and reality. Consequently, when you consider that you believe it is not worthwhile for you to value animal interests and you discover that this belief is inconsistent with other of your beliefs, you will not reject just any of those other beliefs you think most likely to be false. (You will subject the initial, motivated belief to equal, unprivileged scrutiny along with the others, and tentatively accept the mutually consistent set of beliefs with the highest probability given your current evidence.)”
If you’re interested in reconsidering Engel’s argument given his intended interpretation of it, I’d like to hear your updated reasons for/against it.
Welcome back.
Ok. I am, actually, quite familiar with how to calculate probabilities of disjunctions; I did not express my objection/question well, sorry. What I was having a hard time taking at face value was the notion of reasoning about moral propositions using this sort of probabilistic logic. That is to say: what, exactly, does it mean to say that you believe “We ought to take steps to make the world a better place” with P = 0.3? Like, maybe we should and maybe we shouldn’t? Probabilities are often said to be understandable as bets; what would you be betting on, in this case? How would you settle such a bet?
In short, for a lot of these propositions, it seems nonsensical to talk about levels of credence, and so what makes sense for reasoning about them is just propositional logic. In which case, you have to assert that if ANY of these things are true, then the entire disjunction is true (and from that, we conclude… something. What, exactly? It’s not clear).
And yet, I can’t help but notice that Engel takes an approach that’s not exactly either of the above. He says:
I don’t know how to interpret that. It seems strange. Logical arguments do not generally work this way, wherein you just have an unordered heap of undifferentiated, independent propositions, which you add up in any old order, and build up some conclusion from them like assembling a big lump of clay from smaller lumps of clay. I don’t rightly know what it would mean for an argument to work like that.
(In other words, my response to the Engel quote above is: “Uh, really? Why...?”)
As for your restatement of Engel’s argument… First of all, I’ve reread that quote from Engel at the end of the PDF, and it just does not seem to me like he is saying what you claim he’s saying. It seems to me that he is suggesting (in the last sentence of the quote) we reason backwards from which beliefs would force less belief revision to which beliefs we should accept as true.
But, ok. Taking your formulation for granted, it still seems to be… rather off. To wit:
Well, here’s the thing. It is certainly true that holding nothing but true beliefs will necessarily imply that your beliefs are consistent with each other. (Although it is possible for there to be apparent inconsistencies, which would be resolved by the acquisition of additional true beliefs.) However, it’s possible to find yourself in a situation where you gain a new belief, find it to be inconsistent with one or more old beliefs, and yet find that, inconsistency aside, both the new and the old beliefs are each sufficiently well supported by the available evidence to treat them as being true.
At this point, you’re aware that something is wrong with your epistemic state, but you have no real way to determine what that is. The rational thing to do here is of course to go looking for more information, more evidence, and see which of your beliefs are confirmed and which are disconfirmed. Until then, rearranging your entire belief system is premature at best.
Why do you characterize the quoted belief as “motivated”? We are assuming, I thought, that I’ve arrived at said belief by the same process as I arrive at any other beliefs. If that one’s motivated, well, it’s presumably no more motivated than any of my other beliefs.
And, in any case, why are we singling out this particular belief for consistency-checking? Engel’s claim that “accepting [the conclusion of becoming a vegetarian] would require minimal belief revision on your part” seems the height of silliness. Frankly, I’m not sure what could make someone say that but a case of writing one’s bottom line first.
Again I say: the correct thing to do is to hold (that is, to treat as true) those beliefs which you think are more likely true than false, and not any beliefs which you think are more likely false than true. Breaking that rule of thumb for consistency’s sake is exactly the epistemic sin which we are supposedly trying to avoid.
But you know what — all of this is a lot of elaborate round-the-bush-dancing. I think it would be far more productive (as these things go) to just look at that list of propositions, see which we accept, and then see if vegetarianism follows reasonably from that. That is to say, rather than analyzing whether the structure of Engel’s argument works in theory, let’s put it to the test on his actual claims, yes?
I’d be betting on whether or not the proposition would follow from the relevant moral theory if I were in possession of all the relevant facts. The bet would be settled by collecting additional facts and updating. I incline toward consequentialist moral theories in which practicality requires that I can never possess all the relevant facts. So it is reasonable for me to evaluate situational moral rules and claims in probabilistic terms based on how confident I am that they will actually serve my overarching moral goals.
As far as I’m aware, that’s exactly how logical arguments work, formally. See the second paragraph here.
Meat tastes good and is a great source of calories and nutrients. That’s powerful motivation for bodies like us. But you can strike that word if you prefer.
We aren’t. We’re requiring only and exactly that it not be singled out for immunity to consistency-checking.
That’s it! That’s exactly the structure of Engel’s argument, and what he was trying to get people to do. :)
That is well and good, except that “making the world a better place” seems to be an overarching moral goal. At some point, we hit terminal values or axioms of some sort. “Whether a proposition would follow from a moral theory” is conceivably something you could bet on, but what do you do when the proposition in question is part of the relevant moral theory?
Certainly not. Engel does not offer any deductive system for getting from the premises to the conclusion. In the derivation of an argument (as alluded to by the linked SEP article), premises and intermediate conclusions have to be ordered (at least partially ordered). Engel seems to be treating his premises as undifferentiated lumps, which you can take in any order, without applying any kind of deduction to them; you just take each ounce of premise and pour it into the big bucket-’o-premise, and see how much premise you end up with; if it’s a lot of premise, the conclusion magically appears. The claim that it doesn’t even matter which premises you hold to be true, only the quantity of them, seems to explicitly reject logical deduction.
Alright then. To the object level!
Let’s see...
Depends on how “pain” and “suffering” are defined. If you define “suffering” to include only mental states of sapient beings, of sufficient (i.e. at least roughly human-level) intelligence to be self-aware, and “pain” likewise, then sure. If you include pain experienced by sub-human animals, and include their mental states in “suffering”, then first of all, I disagree with your use of the word “suffering” to refer to such phenomena, and second of all, I do not hold (p1) under such a formulation.
See (p1).
If by “cruelty” you mean … etc. etc., basically the same response as (p1). Humans? Agreed. Animals? Nope.
Depends on the steps. If by this you mean “any steps”, then no. If by this you mean “this is a worthy goal, and we should find appropriate steps to achieve it and take them”, then sure. We’ll count this one as a “yes”. (Of course we might differ on what constitutes a “better” world, but let’s assume away such disputes for now.)
Agreed.
First of all, this is awfully specific and reads like a way to sneak in connotations. I tend to reject such formulations on general principles. In any case, I don’t think that “morally good person” is a terribly useful concept except as shorthand. We’ll count this one as a “no”.
Pursuant to the caveats outlined in my responses to all of the above propositions… sure. Said caveats partially neuter the statement for Engel’s purposes, but for generosity’s sake let’s call this a “yes”.
See response to (p5); this is not very meaningful. So, no.
Yep.
I try not to think of myself in terms of “what sort of person” I am. As for whether reducing the amount of pain and suffering is a good thing and whether I should do it — see (p4) and (p4′). But let’s call this a “yes”.
This seems relatively uncontroversial.
Nope. (And see (p1) re: “suffering”.)
Nope.
Whether we “ought to” do this depends on circumstances, but this is certainly not inherently true in a moral sense.
Nope.
I’ll agree with this to a reasonable extent.
Sure.
So, tallying up my responses, and ignoring all waffling and qualifications in favor of treating each response as purely binary for the sake of convenience… it seems I agree with 7 of the 17 propositions listed. Engel then says:
So according to this, it seems that I should have a… moderate commitment to the immorality of eating meat? But here’s the problem:
How does the proposition “eating meat is immoral” actually follow from the propositions I assented to? Engel claims that it does, but you can’t just claim that a conclusion follows from a set of premises, you have to demonstrate it. Where is the demonstration? Where is the application of deductive rules that takes us from those premises to the conclusion? There’s nothing, just a bare set of premises and then a claimed conclusion, with nothing in between, no means of getting from one to the other.
My usual reply to a claim that a philosophical statement is “proven formally” is to ask for a computer program calculating the conclusion from the premises, in the claimant’s language of choice, be it C or Coq.
Oh, really? ;)
#include <string>

// Takes the premises, ignores them entirely, and simply asserts the conclusion.
std::string calculate_the_conclusion(const std::string the_premises[])
{
    return "The conclusion. Q.E.D.";
}
This function takes the premises as a parameter, and returns the conclusion. Criterion satisfied?
Yes, it explicates the lack of logic, which is the whole point.
I confess to being confused about your intended point. I thought you were more or less agreeing with me, but now I am not so sure?
Yes I was. My point was that if one writes a program that purports to prove that
then the code can be examined and the hidden assumptions and inferences explicated. In the trivial example you wrote the conclusion is assumed, so the argument that it is proven from the propositions (by this program) is falsified.
Ah. Yeah, agreed. Of course, enough philosophers disdain computer science entirely that the “arguments” most in need of such treatment would be highly unlikely to receive it. “Argument by handwaving” or “argument by intimidation” is all too common among professional philosophers.
The worst part is how awkward it feels to challenge such faux-arguments. “Uh… this… what does this… say? This… doesn’t say anything. This… this is actually just a bunch of nonsense. And the parts that aren’t nonsense are just… just false. Is this… is this really supposed to be the argument?”
Hence my insistence on writing it up in a way a computer would understand.
That doesn’t even pass a quick inspection test for “can do something different when handed different parameters”.
The original post looks at least as good as: int calculate_the_conclusion(const std::vector<std::string>& premises_accepted_by_reader) { int result = 0; for (const auto& premise : premises_accepted_by_reader) { ++result; } return result; }
-note the “at least”.
OK, since you are rejecting formal logic I’ll agree we’ve reached a point where no further agreement is likely.
Uh, with all respect, claiming that I am the one rejecting formal logic here is outlandishly absurd.
I have to ask: did you, in fact, read the entirety of my post? Honest question; I’m not being snarky here.
If you did (or do) read it, and still come to the conclusion that what’s going on here is that I am rejecting formal logic, then I guess we have exhausted the fruitfulness of the discussion.
I don’t think there’s a subthread about posthumans here yet, which surprises me. Most of the other points I’d think to make have been made by others.
Several times you specify that you care about humanity, because you are able to have relationships with humans. A few questions:
1) SaidAchmiz, whose views seem similar to yours, specified they hadn’t owned pets. Have you owned pets?
While this may vary from person to person, it seems clear to me that people are able to form relationships with dogs, cats, rats, and several other types of mammals (this is consistent with the notion that more-similar animals are able to form relationships with each other, on a sliding scale).
I’ve also recently made a friend with two pet turtles. One of the turtles seems pretty bland and unresponsive, but the other seems incredibly interested in interaction. I expect that some amount of the perceived relationship between my friend and their turtle is human projection, but I’ve still updated quite a bit on the relative potential-sentience of turtles. (Though my friend’s veterinarian did say the turtle is an outlier in terms of how much personality a turtle expresses.)
2) You’ve noted that you don’t care about babyeaters. Do you care about potential posthumans who share all the values you currently have, but also have new values you don’t care about one way or another, and are vastly more intelligent/empathetic/able to form complex relationships in ways you can’t understand? Do you expect those posthumans to care about you?
I’m not sure how good an argument it is that “we should care about things dumber than us because we’d want smarter things to care about us”, in the context of aliens who might not share our values at all. But it seems at least a little relevant, when specifically concerning the possibility of trans-or-posthumans.
3) To the extent that you are not able to form relationships with other humans (because they are stupider than you, because they are less empathetic, or just because they’re jerks, or don’t share enough interests with you), do you consider them to have less moral worth? If not, why not?
Intellectually, I’m interested in the question: what moral framework would Extrapolated-Qiaochu-Yuan endorse? (Since, again, I’m an anti-realist.)
I had fish once, but no complicated pets.
People are also able to form relationships of this kind with, say, ELIZA or virtual pets in video games or waifus. This is an argument in favor of morally valuing animals, but I think it’s a weak one without more detail about the nature of these relationships and how closely they approximate full human relationships.
Depends. If they can understand me well enough to have a relationship with me analogous to the relationship an adult human might have with a small child, then sure.
I hid a lot of complexity in “in principle.” This objection also applies to humans who are in comas, for example, but a person being in a coma or not sharing my interests is a contingent fact, and I don’t think contingent facts should affect what beings have moral worth. I can imagine possible worlds reasonably close to the actual one in which a person isn’t in a coma or does share my interests, but I can’t imagine possible worlds reasonably close to the actual one in which a fish is complicated enough for me to have a meaningful relationship with.