If distinct races were instead distinct human subspecies or closely-related species, would the moral case for treating these groups equally ipso facto collapse?
If not, then ‘they’re human too’ must be a stand-in for some other feature that’s really doing the pushing and pulling of our moral intuitions. At the very least, we need to taboo ‘human’ to figure out what the actual relevant concept is, since it’s not the standard contemporary biological definition.
In my case, I think that the relevant concept is human-level (or higher) intelligence. Of all the known species on Earth, humanity is the only one that I know to possess human-level or higher intelligence.
One potentially suitable test for human-level intelligence is the Turing test; due to its voice-mimicking abilities, a parrot or mynah bird may sound human at first, but it will not in general pass a Turing test.
Biological engineering on an almost-sufficiently-intelligent species (such as a dolphin) may lead to another suitably intelligent species with very little relation to a human.
That different races have effectively the same intellectual capacities is surely an important part of why we treat them as moral equals. But this doesn’t seem to me to be entirely necessary — young children and the mentally handicapped may deserve most (though not all) moral rights, while having a substantially lower level of intelligence. Intelligence might also turn out not to be sufficient; if a lot of why we care about other humans is that they can experience suffering and pleasure, and if intelligent behavior is possible without affective and evaluative states like those, then we might be able to build an AI that rivaled our intelligence but did not qualify as a moral patient, or did not qualify as one to the same extent as less-intelligent-but-more-suffering-prone entities.
Clearly, below-human-average intelligence is still worth something … so is there a cutoff point or what?
(I think you’re onto something with “intelligence”, but since intelligence varies, shouldn’t how much we care vary too? Shouldn’t there be some sort of sliding scale?)
That’s a very good question.

I don’t know.

Thinking through my mental landscape, I find that in most cases I value children (slightly) above adults. I think that this is more a matter of potential than anything else. I also put some value on an unborn human child, which could reasonably be said to have no intelligence at all (especially early on).
So, given that, I think that I put some fairly significant value on potential future intelligence as well as on present intelligence.
But, as you point out, below-human intelligence is still worth something.
...
I don’t think there’s really a firm cutoff point, such that one side is “worthless” and the other side is “worthy”. It’s a bit like a painting.
At one time, there’s a blank canvas, a paintbrush, and a pile of tubes of paint. At this point, it is not a painting. At a later time, there’s a painting. But there isn’t one particular moment, one particular stroke of the brush, when it goes from “not-a-painting” to “painting”. Similarly for intelligence; there isn’t any particular moment when it switches automatically from “worthless” to “worthy”.
If I’m going to eat meat, I have to find the point at which I’m willing to eat it by some other means than administering I.Q. tests (especially as, when I’m in the supermarket deciding whether or not to purchase a steak, it’s a bit late to administer any tests to the cow). Therefore, I have to use some sort of proxy measurement with correlation to intelligence instead. For the moment, i.e. until some other species is proven to have human-level or near-human intelligence, I’m going to continue to use ‘species’ as my proxy measurement.
See Arneson’s ‘What, if anything, renders all humans morally equal?’: www.philosophyfaculty.ucsd.edu/faculty/rarneson/singer.pdf

So what do you think of ‘sapient’ as a taboo replacement for ‘human’? Necessary conditions on sapience will, I suppose, be things like language use and sensation. As for those mentally handicapped enough to fall below sapience, I’m willing to bite the bullet on that so long as we’re willing to discuss indirect reasons for according something moral respect. Something along the lines of Kant’s claim that cruelty to animals is wrong not because of the rights of the animal (who has none) but because wantonly harming a living thing damages the moral faculties of the agent.
How confident are you that beings capable of immense suffering, but who haven’t learned any language, all have absolutely no moral significance? That we could (as long as it didn’t damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?
I don’t see any particular reason for this to be the case, and again the risks of assuming it and being wrong seem much greater than the risks of assuming its negation and being wrong.
That we could (as long as it didn’t damage our empathy) brutally torture an arbitrarily large number of languageless beings for their entire lifetimes and never even cause as much evil as would one momentary dust speck to a language-user (who meets the other sapience conditions as well)?
I’m not committed to this, or anything close. What I’m committed to is the ground of moral respect being sapience, and whatever story we tell about the moral respect accorded to non-sapient (but, say, sentient) beings is going to relate back to the basic moral respect we have for sapience. This is entirely compatible with regarding sentient non-language-users as worthy of protection, etc. In other words, I didn’t intend my suggestion about a taboo replacement to settle the moral-vegetarian question. It would be illicit to expect a rephrasing of the problem to do that.
So to answer your question:
How confident are you that beings capable of immense suffering, but who haven’t learned any language, all have absolutely no moral significance?
I donno, I didn’t claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
“Sapience” is not a crisp category. Humans are more sapient than chimpanzees, crows, and dogs. Chimpanzees, crows, and dogs are more sapient than house cats and fish. Some humans are more or less sapient than other humans.
Suppose one day we encounter a non-human intelligent species that is to us as we are to chimpanzees. Would such a species be justified in considering us as non-sapient and unworthy of moral respect?
I don’t think sapience and/or sentience is necessarily a bad place to start. However I am very skeptical of attempts to draw hard lines that place all humans in one set, and everything else on Earth in another.
Well, I was suggesting a way of making it pretty crisp: it requires language use. None of those other animals can really do that. But to the extent that they might be trained to do so, I’m happy to call those animals sapient. What’s clear is that, for example, dogs, cows, or chickens are not at all sapient by this standard.
Would such a species be justified in considering us as non-sapient and unworthy of moral respect?
No, but I think the situation you describe is impossible. That intelligent species (assuming they understood us well enough to make this judgement) would recognize that we’re language-users. Chimps aren’t.
Sorry, still not crisp. If you’re using sapience as a synonym for language, language is not a crisp category either. Crows and elephants have demonstrated abilities to communicate with other members of their own species. Chimpanzees can be taught enough language to communicate bidirectionally with humans. Exactly what this means for animal cognition and intelligence is a matter of much dispute among scientists, as is whether animals can really be said to use language or not; but the fact that it is disputed should make it apparent that the answer is not obvious or self-evident. It’s a matter of degree.
Ultimately this just seems like a veiled way to specially privilege humans, though not all of them. Is a stroke victim with receptive aphasia non-sapient? You might equally well pick the use of tools to make other tools, or some other characteristic, to draw the line where you’ve predetermined it will be drawn; but it would be more honest to simply state that you privilege Homo sapiens sapiens, and leave it at that.
If you’re using sapience as a synonym for language, language is not a crisp category either.
Not a synonym. Language use is a necessary condition. And by ‘language use’ I don’t mean ‘ability to communicate’. I mean more strictly something able to work with things like syntax and semantics and concepts and stuff. We’ve trained animals to do some pretty amazing things, but I don’t think any, or at least not more than a couple, are really language users. I’m happy to recognize the moral worth of any there are, and I’m happy to recognize a gradient of worth on the basis of a gradient of sapience. I don’t think anything we’ve encountered comes close to human beings on such a gradient, but that might just be my ignorance talking.
Ultimately this just seems like a veiled way to specially privilege humans,
It’s not veiled! I think humans are privileged, special, better, more significant, etc. And I’m not picking an arbitrary part of what it means to be human. I think this is the very part that, were we to find it in a computer or an alien or an animal, would immediately lead us to conclude that this being had moral worth.
Are you seriously suggesting that the difference between someone you can understand and someone you can’t matters just as much as the difference between me and a rock? Do you think your own moral worth would vanish if you were unable to communicate with me?
Yes, I’m suggesting both, on a certain reading of ‘can’ and ‘unable’. If I were, in principle, incapable of communicating with anyone (in the way worms are), then my moral worth (or anyway the moral worth that, on my view, is accorded to sapient beings on the basis of their being sapient) would disappear. I might have moral worth for other reasons, though I suspect these will come back to my holding some important relationship to sapient beings (like formerly being one).
If you are asking whether my moral worth would disappear if I, a language user, were by some twist of fate made unable to communicate, then my moral worth would not disappear (since I am still a language user).
The goal of defining ‘human’ (and/or ‘sapient’) here is to steel-man (or at least better understand) the claim that only human suffering matters, so we can evaluate it. If “language use and sensation” end up only being necessary or sufficient for concepts of ‘human’ that aren’t plausible candidates for the original ‘non-humans aren’t moral patients’ claim, then they aren’t relevant. The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything. But we can still adopt an outsider’s perspective and claim that universes with sentience but no sapience are better when they have a higher ratio of joy to suffering, or of preference satisfaction to preference frustration.
The goal here isn’t to come up with the one true definition of ‘human’, just to find one that helps with the immediate task of cashing out anthropocentric ethical systems.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture. It does probably entail that our reasons for protecting sentient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons. This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us (when we have no other reason to care) is that it is specifically human suffering.
Another way to put this is that I’m defending, or trying to steel-man, the claim that the fact that a human’s suffering is human gives us a reason all on its own to think that that suffering is ethically significant, while nothing about an animal’s suffering being animal suffering gives us a reason all on its own to think that that suffering is ethically significant. We could still have other reasons to think it so, so the ‘infinite torture’ objection doesn’t necessarily land.
Well, you’d be at a loss because you either wouldn’t exist or wouldn’t be able to linguistically express anything.
Right, exactly. Doing so, and defending an anthropocentric ethical system, does not entail that it’s perfectly okay to subject sentient non-language users to infinite torture.
You seem to be using ‘anthropocentric’ to mean ‘humans are the ultimate arbiters or sources of morality’. I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’. Then by definition it doesn’t matter whether non-humans are tortured, except insofar as this also diminishes humans’ welfare. This is the definition that seems relevant to Qiaochu’s statement, “I am still not convinced that I should care about animal suffering.” The question isn’t why we should care; it’s whether we should care at all.
It does probably entail that our reasons for protecting sentient non-language users (if we discover it ethically necessary to do so as anthropocentrists) will come down to anthropocentric reasons.
I don’t think which reasons happen to psychologically motivate us matters here. People can have bad reasons to do good things. More interesting is the question of whether our good reasons would all be human-related, but that too is independent of Qiaochu’s question.
This argument didn’t begin as an attempt to steel-man the claim that only human suffering matters; it began as an attempt to steel-man the claim that the reason human suffering matters to us is that it is specifically human suffering.
No, the latter was an afterthought. The discussion begins here.
I’m using ‘anthropocentric’ instead to mean ‘only human experiences matter’.
Ah, okay, to be clear, I’m not defending this view. I think it’s a strawman.
I don’t think which reasons happen to psychologically motivate us matters here.
I didn’t refer to psychological reasons. An example besides Kant’s (which is not psychological in the relevant sense) might be this: it is unethical to torture a cow because, though cows have no ethical significance in and of themselves, they do have ethical significance as domesticated animals, who are wards of our society. But that’s just an example of such a reason.
No, the latter was an afterthought. The discussion begins here.
I took the discussion to begin from Peter’s response to that comment, since that comment didn’t contain an argument, while Peter’s did. It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
But this is getting to be a discussion about our discussion. I’m not tapping out, quite, but I would like us to move on to the actual conversation.
It would be weird for me to respond to Qiaochu’s request for an argument defending the moral significance of animal suffering by defending the idea that only human suffering is fundamental.
Not if you agreed with Qiaochu that no adequately strong reasons for caring about any non-human suffering have yet been presented. There’s no rule against agreeing with an OP.
Fair point, though we might be reading Qiaochu differently. I took him to be saying “I know of no reasons to take animal suffering as morally significant, though this is consistent with my treating it as if it is and with its actually being so.” I suppose you took him to be saying something more like “I don’t think there are any reasons to take animal suffering as morally significant.”
I don’t have good reasons to think my reading is better. I wouldn’t want to try and defend Qiaochu’s view if the second reading represents it.
I donno, I didn’t claim that they had no moral significance. I am pretty sure that if the universe consisted only of sentient but no sapient beings, I would be at a loss as to how we should discuss moral significance.
If that were the case, there would be no one to do the discussing.
Well, we could discuss that world from this one.
Yes, and we could, for example, assign that world no moral significance relative to our world.