The premise that the reason we do not kill people is that they have desires seems deeply flawed. I don’t kill people because they are people, and I have a term in my utility function that cares about people. Thinking about it, this term doesn’t care about arbitrary desires, but rather about people specifically. (Not necessarily humans, of course. For the same reason, I would not want to kill a sentient alien or AI.) If it were desires that mattered, that would imply a bunch of extremely unintuitive things, far beyond what you cover here.
For example: this premise implies that killing someone who is easygoing and carefree is a lot less bad than killing your average person. To me, at least, this conclusion seems rather repugnant. Do carefree people have a lesser moral standing? That is far from obvious.
(Or what about animals? From what we can observe, animals have plenty of desires, nearly as many as humans, or just as many. If we really were using desire as our metric of moral worth, we would have to value animal lives at a very high rate. While I do believe that humanity should treat animals better than they are treated now, I don’t think anyone seriously believes that they should be given the same (or very similar) moral weight as humans.)
The “People” argument, once you taboo “people”, becomes pretty convoluted; to some extent, the question of what constitutes a person is exactly the question the “desires” perspective seeks to answer.
Additionally, if we treat “desires” as a qualitative rather than quantitative measure (“Has desires” rather than “How many desires”), one of your rejections goes away.
That said, I agree with a specific form of this argument, which is that “Desires experienced” isn’t a good measure of moral standing, because it fails to add up to normality; it doesn’t include everything we’d like to include, and it doesn’t exclude everything we’d like to exclude.
If I cared about “desires” then I would expect to treat cats and dogs analogously to how I treat humans, and this is patently false if you observe my behavior. Clearly I value “humans”, not “animals with desires”. Defining “human” might be beyond me, but I still seem to know them when I see ’em.
Unless you have an insanely low level of akrasia, I’d be wary of using your behavior as a guide to your values.
Not necessarily. If animals desire radically different things from humans, then you’d treat them differently even if you valued their desires equally. I don’t think dogs and cats have the same sort of complex desires humans do; they seem to value attention and food, and disvalue pain, fear, and hunger. So as long as you don’t actively mistreat animals, you are probably respecting their desires.
If a dog walked up to you and demonstrated that it could read, write, and communicate with you, seemed to have a genius-level IQ, and expressed a desire to go to college and learn theoretical physics, wouldn’t you treat it more like a human and less like a normal dog?
I’m not saying “having desires” isn’t a factor somewhere, but I’m not a vegetarian, so clearly I don’t mind killing animals. I have no de facto objection to eating dog meat instead of cow meat, but I’d be appalled to eat a human. As near as I can tell, this applies exclusively to humans. I strongly suspect I’d be bothered to eat a talking dog, but I suspect both the talking and non-talking dogs have a desire not to be my dinner. The pertinent difference there seems to be communication, not desire.
I’m fine calling the relevant trait “being human” since, in this reality, it’s an accurate generalization. I’m fine being wrong in the counterfactual “Dogs Talk” reality, since I don’t live there. If I ever find myself living in a world with beings that are both (!human AND !dinner), I’ll re-evaluate which traits are relevant. Until then, I have enough evidence to rule out “desire”, and insufficient evidence to propose anything other than “human” as a replacement :)
Most of the time. Unfortunately, a definition that works “most of the time” is wholly unworkable. Note that the “desire” definition arose out of the abortion debate.
Do not consider this an insistence that you provide a viable alternative; rather, it is an insistence that you provide one only if you find it to be a viable alternative.
I think general relativity is pretty workable despite working “most of the time”.
I’ve heard a common argument, post-tabooing “people”, to be: “I care about things that are close in thing-space to me. ‘People’ is just a word I use for algorithms that run like me.” (This is pretty much how I function, in a loose sense, actually.)
There is something I am having that I label “subjective experience.” I value this thing in myself and others. That I can’t completely specify it doesn’t matter much; I can’t fully specify most of my values.
You can’t even tell whether or not others have this thing that you label subjective experience. Are you sure you value it in others?
The world in which I am the only being having a subjective experience and everyone else is a p-zombie is ridiculously unlikely.
I didn’t suggest you were the only one having a subjective experience. I suggested that what you -label- a subjective experience may not be experienced by others.
Are you seeing the same red I am? Maybe. That doesn’t stop us from using a common label to refer to an objective phenomenon.
Similarly, whatever similarities you think you share with other people may be the product of a common label referring to an objective phenomenon, experienced subjectively differently. And that can include subjective existence. The qualities of subjective existence you value are not necessarily present in every possible subjective existence.
And when it results in predictions of different futures, I’ll care.
You shouldn’t try to taboo “people”. Actual human brains really do think in terms of the category “people”. If the world changes and the category no longer carves it at its joints (say, if superhuman AI is developed), human brains will remain to some extent hardwired with their category of “people”. The only answer to the question of what constitutes a person is to go look at how human brains pattern-match things to recognize persons, which is that they look and behave like humans.
That kind of attitude is an extremely effective way of -preventing- you from developing superhuman AI, or at least the kind you’d -want- to develop. Your superhuman AI needs to know the difference between plucked chickens and Greek philosophers.
I think I don’t understand what you’re saying.
If you try to formalize what “people” or “morally valuable agents” are—also known as tabooing the word “people”—then you run into problems with bad definitions that don’t match your intuition and maybe think plucked chickens are people.
That’s exactly why I’m arguing that you should not formalize or taboo “people”, because it’s not a natural category; it’s something that is best defined by pointing to a human brain and saying “whatever the brain recognizes as people, that’s people”.
Are you going to put a human brain in your superhuman AI so it can use it as a reference?
I could if I had to. Or I could tell it to analyze some brains and remember the results.
As OrphanWilde said in their reply, when I said that it is bad to kill people because they have desires that killing them would thwart, what I was actually trying to do was taboo “person” and figure out what makes someone a person. And I think “an entity that has desires” is one of the best definitions of “person” that we’ve come up with so far. This view is shared by many philosophers; see Wikipedia’s entry on “personism”, for instance.
I don’t regard the quantity of desires someone has as being what makes it wrong to kill them. Rather, it is that they have future-directed preferences at all. In other words, being able to have desires is part of (or maybe all of, I’m not sure) what makes you a “person,” and killing a person is bad.
Also, if the quantity of desires were what made you morally significant, you could increase your moral significance by subdividing your desires. For instance, if I were arguing with someone over the last slice of pizza, it would not be a morally valid argument to say “I want to eat the crust, cheese, and sauce, while you just want to eat the pizza. Three desires trumps one!”
So it’s having desires at all that makes us morally significant (i.e. makes us persons), not how many we have.
What I think gives humans more moral weight than animals is that we are capable of conceiving of the future and having preferences about how the future will go. Most animals are not. Killing a human is bad because of all their future-directed preferences and life-goals that will be thwarted. Most animals, by contrast, literally do not care whether they live or die (animals do behave in ways that result in their continued existence, but these activities seem motivated by a desire to gain pleasure and avoid pain, rather than prudence about the future). So killing an animal generally does not have the same level of badness as killing a human does (animals can feel pleasure and pain, however, so the method of killing them had better be painless).
Of course, there might be a few species of animals that can conceive of the future and have preferences about it (the great apes, for instance). I see no difference between killing those animals and killing a human who is mildly retarded.