In my case, I think that the relevant concept is human-level (or higher) intelligence. Of all the known species on Earth, humanity is the only one I know of that possesses it.
One potentially suitable test for human-level intelligence is the Turing test; thanks to their voice-mimicking abilities, a parrot or a mynah bird may sound human at first, but neither will, in general, pass a Turing test.
Biological engineering on an almost-sufficiently-intelligent species (such as a dolphin) may lead to another suitably intelligent species with very little relation to a human.
That different races have effectively the same intellectual capacities is surely an important part of why we treat them as moral equals. But this doesn’t seem to me to be entirely necessary — young children and the mentally handicapped may deserve most (though not all) moral rights, while having a substantially lower level of intelligence. Intelligence might also turn out not to be sufficient; if a lot of why we care about other humans is that they can experience suffering and pleasure, and if intelligent behavior is possible without affective and evaluative states like those, then we might be able to build an AI that rivaled our intelligence but did not qualify as a moral patient, or did not qualify as one to the same extent as less-intelligent-but-more-suffering-prone entities.
Clearly, below-human-average intelligence is still worth something … so is there a cutoff point or what?
(I think you’re onto something with “intelligence”, but since intelligence varies, shouldn’t how much we care vary too? Shouldn’t there be some sort of sliding scale?)
That’s a very good question.
I don’t know.
Thinking through my mental landscape, I find that in most cases I value children (slightly) above adults. I think that this is more a matter of potential than anything else. I also put some value on an unborn human child, which could reasonably be said to have no intelligence at all (especially early on).
So, given that, I think that I put some fairly significant value on potential future intelligence as well as on present intelligence.
But, as you point out, below-human intelligence is still worth something.
...
I don’t think there’s really a firm cutoff point, such that one side is “worthless” and the other side is “worthy”. It’s a bit like a painting.
At one time, there’s a blank canvas, a paintbrush, and a pile of tubes of paint. At this point, it is not a painting. At a later time, there’s a painting. But there isn’t one particular moment, one particular stroke of the brush, when it goes from “not-a-painting” to “painting”. Similarly for intelligence; there isn’t any particular moment when it switches automatically from “worthless” to “worthy”.
If I’m going to eat meat, I have to decide where to draw the line by some means other than administering I.Q. tests (especially as, when I’m in the supermarket deciding whether or not to purchase a steak, it’s a bit late to administer any tests to the cow). Therefore, I have to use some sort of proxy measurement that correlates with intelligence instead. For the moment, i.e. until some other species is proven to have human-level or near-human intelligence, I’m going to continue to use ‘species’ as my proxy measurement.