Desires You’re Not Thinking About at the Moment
While doing some reading in philosophy, I came across some interesting questions about the nature of having desires and preferences. One: do you still have preferences and desires when you are unconscious? Two: if you don't, does this call into question the many moral theories that hold that having preferences and desires is what makes one morally significant, given that mistreating temporarily unconscious people seems obviously immoral?
Philosophers usually discuss this question when debating the morality of abortion, but to avoid doing any mindkilling I won’t mention that topic, except to say in this sentence that I won’t mention it.
In more detail, the issue is this: a common, intuitive, and logical-seeming explanation for why it is immoral to destroy a typical human being, but not a rock, is that a typical human being has certain desires (or preferences or values, whatever you wish to call them; I'm using the terms interchangeably) that they wish to fulfill, and destroying them would hinder the fulfillment of these desires. A rock, by contrast, has no such desires, so it is not harmed by being destroyed. The problem is that it also seems immoral to harm a human being who is asleep or in a temporary coma, and, on the face of it, it seems plausible to say that an unconscious person does not have any desires. (And of course it gets even weirder when considering far-out cases like a brain emulation that is saved to a hard drive but isn't being run at the moment.)
After thinking about this, it occurred to me that this line of reasoning could be taken further. If I am not thinking about my car at the moment, can I still be said to desire that it not be stolen? Do I stop having desires about things the instant my attention shifts away from them?
I have compiled a list of possible solutions to this problem, ranked in order from least plausible to most plausible.
1. One possibility would be to consider it immoral to harm a sleeping person because they will have desires in the future, even if they don't now. I find this argument extremely implausible because it has some bizarre implications, some of which may lead to insoluble moral contradictions. For instance, it could be used to argue that it is immoral to destroy skin cells, because it is possible to use them to clone a new person, who will eventually grow up to have desires.
Furthermore, when human beings eventually gain the ability to build AIs that possess desires, this solution interacts with the orthogonality thesis in a catastrophic fashion. If it is possible to build an AI with any utility function, then for every potential AI one can construct, there is another potential AI whose desires are the exact opposite. That leads to total paralysis, since for every potential set of desires we are capable of satisfying, there is another potential set that would be horribly thwarted.
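To make the paralysis point concrete, here is a toy sketch (the utility functions and names are invented purely for illustration, not anyone's actual proposal): if the space of potential minds is closed under negating utility functions, then the aggregate desire-satisfaction of all potential minds is zero no matter what we do.

```python
# Toy illustration: if potential minds come in utility/anti-utility pairs,
# their aggregate satisfaction is identically zero across all outcomes.

def aggregate_satisfaction(outcome, utility_functions):
    """Total desire-satisfaction of a set of potential minds at an outcome."""
    return sum(u(outcome) for u in utility_functions)

# Two hypothetical potential AIs with exactly opposite utility functions.
paperclip_maximizer = lambda outcome: outcome["paperclips"]
paperclip_minimizer = lambda outcome: -outcome["paperclips"]

potential_minds = [paperclip_maximizer, paperclip_minimizer]

for paperclips in (0, 100, 10**6):
    outcome = {"paperclips": paperclips}
    # Prints 0 every time: whatever we do for one potential mind, we undo
    # for its mirror image, so "respect the desires of potential minds"
    # gives no guidance at all.
    print(paperclips, aggregate_satisfaction(outcome, potential_minds))
```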
Lastly, this argument implies that you can (and may be obligated to) help someone who doesn't exist, and never has existed, by satisfying their non-personal preferences, without ever having to bother with actually creating them. This seems strange; I can maybe see an argument for respecting the once-existent preferences of those who are dead, but respecting the hypothetical preferences of the never-existed seems absurd. It also runs into the same problems with the orthogonality thesis that I mentioned earlier.
2. Make the same argument as solution 1, but somehow define the categories more narrowly, so that an unconscious person's ability to have desires in the future differs from that of an uncloned skin cell or an unbuilt AI. Michael Tooley has tried to do this by distinguishing between things that merely have the "possibility" of becoming a person with desires (e.g., skin cells) and things that have the "capacity" to have desires. This approach has been criticized, and I find myself pessimistic about it because real-life categories tend to be "fuzzy" rather than sharply bordered.
3. Another solution may be that desires one has had in the past continue to count, even when one is unconscious or not thinking about them. So it's immoral to harm unconscious people because, before they were unconscious, they had a desire not to be harmed, and it's immoral to steal my car because I desired that it not be stolen earlier, when I was thinking about it.
I find this solution fairly convincing. The only major quibble I have with it is that it gives what some might consider a counter-intuitive result on a variation of the sleeping-person question. Imagine a nano-factory manufactures a sleeping person. This person is a new and distinct individual, and when they wake up they will proceed to behave as a typical human. This solution may suggest that it is okay to kill them before they wake up, since they haven't had any desires yet, which does seem odd.
4. Reject the claim that one doesn't have desires when one is unconscious, or when one is not thinking about a topic. The more I think about this solution, the more obvious it seems. Generally, when I am rationally deliberating about whether or not I desire something, I consider how many of my values and ideals it fulfills. My list of values and ideals seems to remain fairly constant, and even if I am focusing my attention on one value at a time, it makes sense to say that I still "have" the other values I am not focusing on at the moment.
Obviously I don’t think that there’s some portion of my brain where my “values” are stored in a neat little Excel spreadsheet. But they do seem to be a persistent part of its structure in some fashion. And it makes sense that they’d still be part of its structure when I’m unconscious. If they weren’t, wouldn’t my preferences change radically every time I woke up?
In other words, it’s bad to harm an unconscious person because they have desires, preferences, values, whatever you wish to call them, that harming them would violate. And those values are a part of the structure of their mind that doesn’t go away when they sleep. Skin cells and unbuilt AIs, by contrast, have no such values.
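A toy model may help picture the distinction solution 4 is drawing (the class and attribute names here are invented for illustration; nothing about real brains is implied): values live in a persistent structure, while attention and consciousness are transient states layered on top of it.

```python
# Toy model of solution 4: values as persistent structure, with attention
# and consciousness as transient states that come and go over it.

class Agent:
    def __init__(self, values):
        self.values = set(values)   # persistent structure
        self.attending_to = None    # transient focus of attention
        self.conscious = True       # transient state

    def focus_on(self, value):
        self.attending_to = value   # shifting attention changes no values

    def sleep(self):
        self.conscious = False      # self.values is left untouched

    def has_desire(self, value):
        # On this view, "having" a desire is a fact about the persistent
        # structure, not about what is currently in the spotlight.
        return value in self.values

alice = Agent({"car not stolen", "not be harmed"})
alice.focus_on("car not stolen")
alice.sleep()
print(alice.has_desire("not be harmed"))  # True: asleep, unattended, still hers
```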
Now, while I think that explanation 4 best resolves the issue of desires and unconsciousness, I do think solution 3 has a great deal of truth to it as well (for instance, I tend to respect the final wishes of a dead person because they had desires in the past, even if they don't now). Solutions 3 and 4 are not at all incompatible, so one can believe both.
I’m curious as to what people think of my possible solutions. Am I right about people still having something like desires in their brain when they are unconscious?