What are the strongest arguments you can offer me in favor of caring about animal suffering to the point that I would be willing to incur the costs involved in becoming more vegetarian?
I am a moral anti-realist, so I don’t think there’s any argument I could give you to persuade you to change your values. To me, it feels very inconsistent not to value animals—it sounds to me exactly like someone who wants to know an argument for why they ought to care about foreigners.
Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction. Though maybe you wouldn’t, or you would think the reaction irrational? I don’t know.
However, if you really do care about humans and humans alone, the environmental argument still has weight, though certainly less.
~
Also, meat is delicious and contains protein.
One can get both protein and deliciousness from non-meat sources.
~
Alternatively, how much would you be willing to pay me to stop eating meat?
I’m not sure. I don’t think there’s a way I could make that transaction work.
Also, do you really not value animals? I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.
Some interesting things about this example:
Distance seems to have a huge impact when it comes to the bystander effect, and it’s not clear that it’s irrational. If you are the person who is clearly best situated to save a puppy from torture, that seems different from the fact that dogs are routinely farmed for meat in other parts of the world, by armies of people you could not hope to personally defeat or control.
Someone who is willing to be sadistic to animals might be sadistic towards humans as well, and so they may be a poor choice to associate with (and possibly a good choice to anti-associate with).
Many first world countries have some sort of law against bestiality. (In the US, this varies by state.) However, any justification for these laws based on the rights of the animals would also rule out related behavior in agribusiness, which is generally legal. There seems to be a difference between what people are allowed to do for fun and what they’re allowed to do for profit; this makes sense in light of viewing the laws as being directed not against actions, but against kinds of people.
To me, it feels very inconsistent not to value animals—it sounds to me exactly like someone who wants to know an argument for why they ought to care about foreigners.
Well, and what would you say to someone who thought that?
Also, do you really not value animals?
I don’t know. It doesn’t feel like I do. You could try to convince me that I do even if you’re a moral anti-realist. It’s plausible I just haven’t spent enough time around animals.
I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.
Probably. I mean, all else being equal I would prefer that an animal not be tortured, but in the case of farming and so forth all else is not equal. Also, like Vaniver said, any negative reaction I have directed at the person is based on inferences I would make about that person’s character, not based on any moral weight I directly assign to what they did. I would also have some sort of negative reaction to someone raping a corpse, but it’s not because I value corpses.
One can get both protein and deliciousness from non-meat sources.
My favorite non-meat dish is substantially less delicious than my favorite meat dish. I do currently get a decent amount of protein from non-meat sources, but asking someone who gets their protein primarily from meat to give up meat means asking them to incur a cost in finding and purchasing other sources of protein, and that cost needs to be justified somehow.
I’m not sure. I don’t think there’s a way I could make that transaction work.
Really? This can’t be that hard a problem to solve. We could use a service like Fiverr, with you paying me $5 not to eat meat for some period of time.
Right now, I don’t know. I feel like it would be playing a losing game. What would you say?
I would probably say something like “you just haven’t spent enough time around them. They’re less different from you than you think. Get to know them, and you might come to see them as not much different from the people you’re more familiar with.” In other words, I would bet on the psychological unity of mankind. Some of this argument applies to my relationship with the smarter animals (e.g. maybe pigs) but not to the dumber ones (e.g. fish). Although I’m not sure how I would go about getting to know a pig.
I’m not sure how I would do that. Would you kick a puppy? If not, why not?
No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn’t chop down a tree either, but it’s not because I think trees have moral value, and I don’t plan to take any action against the logging industry as a result.
How could I verify that you actually refrain from eating meat?
Oh, that’s what you were concerned about. It would be beneath my dignity to lie for $5, but if that isn’t convincing, then I dunno. (On further thought, this seems like a big problem for measuring the actual impact of any proposed vegetarian proselytizing. How can you verify that anyone actually refrains from eating meat?)
“No. Again, all else being equal, I would prefer that animals not suffer, but in the context of reducing animal suffering coming from human activity like farming, all else is not equal. I wouldn’t chop down a tree either, but it’s not because I think trees have moral value, and I don’t plan to take any action against the logging industry as a result.”
All else is never precisely equal. If I offered you £100 to do one of the following, would you rather
a) give up meat for a month
b) beat a puppy to death
I suspect that the vast majority of people who eat battery chicken to save a few dollars would require much more money to directly cause the same sort of suffering to a chicken, whereas when it came to chopping down trees it would be more a matter of whether the cash was worth the effort. Of course, it could very easily be that the problem here is not with Person A (detached, callous eater of battery chicken) but with Person B (overempathic, anthropomorphizing person who doesn’t like to see chickens suffering), but the contrast is quite telling.
For what it’s worth, I also wouldn’t treat painlessly and humanely slaughtering a chicken who has lived a happy and fulfilled life with my own hands equivalently to paying someone else to do so where I don’t have to watch. There’s quite a contrast there, as well, but it seems to have little to do with suffering.
That said, I would almost undoubtedly prefer watching a chicken be slaughtered painlessly and humanely to watching it suffer while being slaughtered. Probably also to watching it suffer while not being slaughtered.
Mostly, I conclude that my preferences about what I want to do, what I want to watch, and what I want to have done on my behalf, are not well calibrated to one another.
Yeah, that’s the only clear conclusion. The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled. Honestly not sure where I stand on this.
I don’t think that “not enjoying killing a chicken” should be described as an “intuition”. Moral intuitions generally take the form of “it seems to me that / I strongly feel that so-and-so is the right thing to do / the wrong thing to do / bad / good / etc.” What you do or do not enjoy doing is a preference, like enjoying chocolate ice cream, not enjoying ice skating, being attracted to blondes, etc. Preferences can’t be “true” or “false”, they’re just facts about your mental makeup. (It may make sense to describe a preference as “invalid” in certain senses, however, but not obviously any senses relevant to this current discussion.)
So for instance “I think killing a chicken is morally ok” (a moral intuition) and “I don’t like killing chickens” (a preference) do not conflict with each other any more than “I think homosexuality is ok” and “I am heterosexual” conflict with each other, or “Being a plumber is ok (and in fact plumbers are necessary members of society)” and “I don’t like looking inside my plumbing”.
Now, if you wanted to take this discussion to a slightly more subtle level, you might say: “This is different! Killing chickens causes in me a kind of psychic distress usually associated with witnessing or performing acts that I also consider to be immoral! Surely this is evidence that this, too, is immoral?” To that I can respond only that yes, this may be evidence in the strict Bayesian sense, but the signals your brain generates may be flawed. We should evaluate the ethical status of the act in question explicitly; yes, we should take moral intuitions into account, but my intuition, at least, is that chicken-killing is fine, despite having no desire to do it myself. This screens off the “agh I don’t want to do/watch this!” signal.
The dividing lines between the kinds of cognitive states I’m inclined to call “moral intuitions” and the kinds of cognitive states I’m inclined to call “preferences” and the kinds of cognitive states I’m inclined to call “psychic distress” are not nearly as sharp, in my experience, as you seem to imply here. There’s a lot of overlap, and in particular the states I enter surrounding activities like killing animals (especially cute animals with big eyes) don’t fall crisply into just one category.
But, sure, if we restrict the discussion to activities where those categories are crisply separated, those distinctions are very useful.
The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled.
Mm. If you mean to suggest that the outcome of moral reasoning is necessarily that one of my intuitions gets endorsed, then I disagree; I would expect worthwhile moral reasoning to sometimes endorse claims that my intuition didn’t provide in the first place, as well as claims that my intuitions consistently reject.
In particular, when my moral intuitions conflict (or, as SaidAchmiz suggests, when there is a conflict among the various states that I have a hard time cleanly distinguishing from my moral intuitions despite their not actually being any such thing), I usually try to envision patterning the world in different ways that map in some fashion to some weighting of those states, ask myself what the expected end result of that patterning is, see whether I have clear preferences among those expected endpoints, work backwards from my preferred endpoint to the associated state-weighting, and endorse that state-weighting.
The result of that process is sometimes distressingly counter-moral-intuitive.
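Sorry, I was unclear: I meant moral (and political) arguments from other people—moral rhetoric if you like—often take that form.
Ah, gotcha. Yeah, that’s true.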
I am a moral anti-realist, so I don’t think there’s any argument I could give you to persuade you to change your values.
The relevant sense of changing values is change of someone else’s purposeful behavior. The philosophical classification of your views doesn’t seem like useful evidence about that possibility.
I don’t understand what that means for my situation, though. How am I supposed to argue him out of his current values?
I mean, it’s certainly possible to change someone’s values through anti-realist argumentation. My values were changed in that way several times. But I don’t know how to do it.
How am I supposed to argue him out of his current values?
This is a separate question. I was objecting to the relevance of invoking anti-realism in connection with this question, not to the bottom line where that argument pointed.
If moral realism were true, there would be a very obvious path to arguing someone out of their values—argue for the correct values. In my experience, when people want an argument to change their values, they want an argument for what the correct value is, assuming moral realism.
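Moral anti-realism certainly complicates things.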
I think if you were to see someone torturing an animal in front of you for fun, you would have some sort of negative reaction.
That doesn’t necessarily mean that I have animals being tortured as a negative terminal value: I might only dislike that because it generates negative warm fuzzies.
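This also applies to foreigners, though.
Well, it also applies to blood relatives, for that matter.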
To me, it feels very inconsistent not to value animals—it sounds to me exactly like someone who wants to know an argument for why they ought to care about foreigners.
Unfortunately, the typical argument in favour of caring about foreigners, people of other races, etc., is that they are human too.
If distinct races were instead distinct human subspecies or closely-related species, would the moral case for treating these groups equally ipso facto collapse?
If not, then ‘they’re human too’ must be a stand-in for some other feature that’s really doing the pushing and pulling of our moral intuitions. At the very least, we need to taboo ‘human’ to figure out what the actual relevant concept is, since it’s not the standard contemporary biological definition.
In my case, I think that the relevant concept is human-level (or higher) intelligence. Of all the known species on Earth, humanity is the only one that I know to possess human-level or higher intelligence.
One potentially suitable test for human-level intelligence is the Turing test; due to their voice-mimic abilities, a parrot or a mynah bird may sound human at first, but it will not in general pass a Turing test.
Biological engineering on an almost-sufficiently-intelligent species (such as a dolphin) may lead to another suitably intelligent species with very little relation to a human.
That different races have effectively the same intellectual capacities is surely an important part of why we treat them as moral equals. But this doesn’t seem to me to be entirely necessary — young children and the mentally handicapped may deserve most (though not all) moral rights, while having a substantially lower level of intelligence. Intelligence might also turn out not to be sufficient; if a lot of why we care about other humans is that they can experience suffering and pleasure, and if intelligent behavior is possible without affective and evaluative states like those, then we might be able to build an AI that rivaled our intelligence but did not qualify as a moral patient, or did not qualify as one to the same extent as less-intelligent-but-more-suffering-prone entities.
Clearly, below-human-average intelligence is still worth something … so is there a cutoff point or what?
(I think you’re onto something with “intelligence”, but since intelligence varies, shouldn’t how much we care vary too? Shouldn’t there be some sort of sliding scale?)
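That’s a very good question.
I don’t know.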
Thinking through my mental landscape, I find that in most cases I value children (slightly) above adults. I think that this is more a matter of potential than anything else. I also put some value on an unborn human child, which could reasonably be said to have no intelligence at all (especially early on).
So, given that, I think that I put some fairly significant value on potential future intelligence as well as on present intelligence.
But, as you point out, below-human intelligence is still worth something.
...
I don’t think there’s really a firm cutoff point, such that one side is “worthless” and the other side is “worthy”. It’s a bit like a painting.
At one time, there’s a blank canvas, a paintbrush, and a pile of tubes of paint. At this point, it is not a painting. At a later time, there’s a painting. But there isn’t one particular moment, one particular stroke of the brush, when it goes from “not-a-painting” to “painting”. Similarly for intelligence; there isn’t any particular moment when it switches automatically from “worthless” to “worthy”.
If I’m going to eat meat, I have to find the point at which I’m willing to eat it by some other means than administering I.Q. tests (especially as, when I’m in the supermarket deciding whether or not to purchase a steak, it’s a bit late to administer any tests to the cow). Therefore, I have to use some sort of proxy measurement with correlation to intelligence instead. For the moment, i.e. until some other species is proven to have human-level or near-human intelligence, I’m going to continue to use ‘species’ as my proxy measurement.
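See Arneson’s “What, if anything, renders all humans morally equal?”: www.philosophyfaculty.ucsd.edu/faculty/rarneson/singer.pdf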
So what do you think of ‘sapient’ as a taboo for ‘human’? Necessary conditions on sapience will, I suppose, be things like language use and sensation. As for those mentally handicapped enough to fall below sapience, I’m willing to bite the bullet on that so long as we’re willing to discuss indirect reasons for according something moral respect. Something along the lines of Kant’s claim that cruelty to animals is wrong not because of the rights of the animal (who has none) but because wantonly harming a living thing damages the moral faculties of the agent.
How confident are you that beings capable of immense suffering, but who haven’t learned any language, all have absolutely no moral significance? That we could (as long as it didn’t damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?
I don’t see any particular reason for this to be the case, and again the risks of assuming it and being wrong seem much greater than the risks of assuming its negation and being wrong.
That we could (as long as it didn’t damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?
I’m not committed to this, or anything close. What I’m committed to is the ground of moral respect being sapience, and whatever story we tell about the moral respect accorded to non-sapient (but, say, sentient) beings is going to relate back to the basic moral respect we have for sapience. This is entirely compatible with regarding sentient non-language-users as worthy of protection, etc. In other words, I didn’t intend my suggestion about a taboo replacement to settle the moral-vegetarian question. It would be illicit to expect a rephrasing of the problem to do that.
So to answer your question:
How confident are you that beings capable of immense suffering, but who haven’t learned any language, all have absolutely no moral significance?
I donno, I didn’t claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
“Sapience” is not a crisp category. Humans are more sapient than chimpanzees, crows, and dogs. Chimpanzees, crows, and dogs are more sapient than house cats and fish. Some humans are more or less sapient than other humans.
Suppose one day we encounter a non-human intelligent species that is to us as we are to chimpanzees. Would such a species be justified in considering us non-sapient and unworthy of moral respect?
I don’t think sapience and/or sentience is necessarily a bad place to start. However I am very skeptical of attempts to draw hard lines that place all humans in one set, and everything else on Earth in another.
Well, I was suggesting a way of making it pretty crisp: it requires language use. None of those other animals can really do that. But to the extent that they might be trained to do so, I’m happy to call those animals sapient. What’s clear is that, for example, dogs, cows, or chickens are not at all sapient by this standard.
Would such a species be justified in considering us non-sapient and unworthy of moral respect?
No, but I think the situation you describe is impossible. That intelligent species (assuming they understood us well enough to make this judgement) would recognize that we’re language-users. Chimps aren’t.
Sorry, still not crisp. If you’re using sapience as a synonym for language, language is not a crisp category either. Crows and elephants have demonstrated abilities to communicate with other members of their own species. Chimpanzees can be taught enough language to communicate bidirectionally with humans. Exactly what this means for animal cognition and intelligence is a matter of much dispute among scientists, as is whether animals can really be said to use language or not; but the fact that it is disputed should make it apparent that the answer is not obvious or self-evident. It’s a matter of degree.
Ultimately this just seems like a veiled way to specially privilege humans, though not all of them. Is a stroke victim with receptive aphasia nonsapient? You might equally well pick the use of tools to make other tools, or some other characteristic to draw the line where you’ve predetermined it will be drawn; but it would be more honest to simply state that you privilege Homo sapiens sapiens, and leave it at that.
If you’re using sapience as a synonym for language, language is not a crisp category either.
Not a synonym. Language use is a necessary condition. And by ‘language use’ I don’t mean ‘ability to communicate’. I mean more strictly something able to work with things like syntax and semantics and concepts and stuff. We’ve trained animals to do some pretty amazing things, but I don’t think any, or at least not more than a couple, are really language users. I’m happy to recognize the moral worth of any there are, and I’m happy to recognize a gradient of worth on the basis of a gradient of sapience. I don’t think anything we’ve encountered comes close to human beings on such a gradient, but that might just be my ignorance talking.
Ultimately this just seems like a veiled way to specially privilege humans,
It’s not veiled! I think humans are privileged, special, better, more significant, etc. And I’m not picking an arbitrary part of what it means to be human. I think this is the very part that, were we to find it in a computer or an alien or an animal, would immediately lead us to conclude that this being had moral worth.
Are you seriously suggesting that the difference between someone you can understand and someone you can’t matters just as much as the difference between me and a rock? Do you think your own moral worth would vanish if you were unable to communicate with me?
Yes, I’m suggesting both, on a certain reading of ‘can’ and ‘unable’. If I were, in principle, incapable of communicating with anyone (in the way worms are) then my moral worth, or anyway the moral worth accorded to sapient beings on the basis of their being sapient on my view, would disappear. I might have moral worth for other reasons, though I suspect these will come back to my holding some important relationship to sapient beings (like formerly being one).
If you are asking whether my moral worth would disappear if I, a language user, were by some twist of fate made unable to communicate, then my moral worth would not disappear (since I am still a language user).
The goal of defining ‘human’ (and/or ‘sapient’) here is to steel-man (or at least better understand) the claim that only human suffering matters, so we can evaluate it. If “language use and sensation” end up only being necessary or sufficient for concepts of ‘human’ that aren’t plausible candidates for the original ‘non-humans aren’t moral patients’ claim, then they aren’t relevant. The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything. But we can still adopt an outsider’s perspective and claim that universes with sentience but no sapience are better when they have a higher ratio of joy to suffering, or of preference satisfaction to preference frustration.
The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture. It does probably entail that our reasons for protecting sapient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons. This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us (when we have no other reason to care) is that it is specifically human suffering.
Another way to put this is that I’m defending, or trying to steel-man, the claim that the fact that a human’s suffering is human gives us a reason all on its own to think that that suffering is ethically significant. While nothing about an animal’s suffering being animal suffering gives us a reason all on its own to think that that suffering is ethically significant. We could still have other reasons to think it so, so the ‘infinite torture’ objection doesn’t necessarily land.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything.
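We can discuss that world from this one.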
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture.
You seem to be using ‘anthropocentric’ to mean ‘humans are the ultimate arbiters or sources of morality’. I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’. Then by definition it doesn’t matter whether non-humans are tortured, except insofar as this also diminishes humans’ welfare. This is the definition that seems relevant to Qiaochu’s statement, “I am still not convinced that I should care about animal suffering.” The question isn’t why we should care; it’s whether we should care at all.
It does probably entail that our reasons for protecting sapient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons.
I don’t think which reasons happen to psychologically motivate us matters here. People can have bad reasons to do good things. More interesting is the question of whether our good reasons would all be human-related, but that too is independent of Qiaochu’s question.
This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us is that it is specifically human suffering.
No, the latter was an afterthought. The discussion begins here.
I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’.
Ah, okay, to be clear, I’m not defending this view. I think it’s a strawman.
I don’t think which reasons happen to psychologically motivate us matters here.
I didn’t refer to psychological reasons. An example besides Kant’s (which is not psychological in the relevant sense) might be this: it is unethical to torture a cow because though cows have no ethical significance in and of themselves, they do have ethical significance as domesticated animals, who are wards of our society. But that’s just an example of such a reason.
No, the latter was an afterthought. The discussion begins here.
I took the discussion to begin from Peter’s response to that comment, since that comment didn’t contain an argument, while Peter’s did. It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
But this is getting to be a discussion about our discussion. I’m not tapping out, quite, but I would like us to move on to the actual conversation.
It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
Not if you agreed with Qiaochu that no adequately strong reasons for caring about any non-human suffering have yet been presented. There’s no rule against agreeing with an OP.
Fair point, though we might be reading Qiaochu differently. I took him to be saying “I know of no reasons to take animal suffering as morally significant, though this is consistent with my treating it as if it is and with its actually being so.” I suppose you took him to be saying something more like “I don’t think there are any reasons to take animal suffering as morally significant.”
I don’t have good reasons to think my reading is better. I wouldn’t want to try and defend Qiaochu’s view if the second reading represents it.
I donno, I didn’t claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
If that was the case there would be no one to do the discussing.
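Well, we could discuss that world from this one.
Yes, and we could, for example, assign that world no moral significance relative to our world.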