Is it possible to simultaneously respect people’s wishes to live, and others’ wishes to die?
Transhumanists are fond of saying that they want to give everyone the choice of when and how they die. Giving people the choice to die is clearly preferable to our current situation, as it respects their autonomy, but it leads to the following moral dilemma.
Suppose someone loves essentially every moment of their life. For tens of thousands of years, they’ve never once wished that they did not exist. They’ve never had suicidal thoughts, and have always expressed a strong desire to live forever, until time ends and after that too. But on one very unusual day they feel bad for some random reason, and now they want to die. It happens to the best of us every few eons or so.
Should this person be allowed to commit suicide?
One answer is yes, because that answer favors their autonomy. But another answer says no, because this day is a fluke. In just one day they’ll recover from their depression. Why let them die when tomorrow they will see their error? Or, as some would put it, why give them a permanent solution to a temporary problem?
There are a few ways of resolving the dilemma. First I’ll talk about a way that doesn’t resolve the dilemma. When I once told someone about this thought experiment, they proposed giving the person a waiting period. The idea was that if the person still wanted to die after the waiting period, then it was appropriate to respect their choice. This solution sounds fine, but there’s a flaw.
Say the probability that you are suicidal on any given day is one in a trillion, and each day is independent. Every normal day you love life and you want to live forever. However, even if we make the waiting period arbitrarily long, there’s a one hundred percent chance that you will die one day, even given your strong preference not to. Eventually, you are guaranteed to express the desire to die and then, independently on each day of the waiting period, continue wanting to die until you’ve waited out every last day. Depending on the length of your waiting period, it may take googols of years for this to happen, but it will happen eventually.
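To make the flaw concrete, here is a minimal sketch of the arithmetic under the idealization above: a fixed waiting period of $W$ days and an independent probability $p$ of being suicidal on any given day (the symbols $W$ and $p$ are just labels for this sketch). Dying requires a run of $W + 1$ consecutive suicidal days: the day you make the request plus every day of the waiting period. Any particular stretch of $W + 1$ days is entirely suicidal with probability $p^{W+1} > 0$. Partition an unbounded lifespan into disjoint blocks of $W + 1$ days; the blocks are independent, so

$$\Pr[\text{no block is ever entirely suicidal}] = \lim_{N \to \infty} \left(1 - p^{W+1}\right)^{N} = 0.$$

However small $p$ is and however large $W$ is, the probability of eventually sustaining the request through a full waiting period is one. A longer waiting period only pushes out the expected date, to something on the order of $p^{-(W+1)}$ blocks.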
So what’s a better way? Perhaps we could allow your current self to die, but then replace you with a backup copy from a day ago, when you didn’t want to die. We could achieve this by uploading a copy of your brain onto a computer each day, keeping it on file just in case future-you wants to die. This would solve the problem of you-right-now one day dying, because even if you decided at some point to die, there would be a line of succession from your current self to future-you stretching out into infinity.
Still others would reject this solution, either because they don’t believe that uploads are “really them” or because they think that this solution still disrespects your autonomy. I will focus on the second objection. Consider someone who says, “If I really, truly wanted to die, I would not consider myself dead if a copy from a day ago were animated and given existence. They are too close to me, and if you animated them, I would no longer be dead. Therefore you would not be respecting my wish to die.”
Is there a way to satisfy this person?
Alternatively, we could imagine setting up the following system: anyone who wants to die is able to, but they must be uploaded and kept on file the moment before they die. Then, if at some point in the distant future we predict that the world has become one they would counterfactually have wished to be around for, rather than remain nonexistent, we reanimate them. Thus, we fully respect their interests. If such a future never comes, they will remain dead. But if a future comes that they would have wanted to be around to see, they will be able to see it.
In this way, we are maximizing not only their autonomy, but also their hypothetical autonomy. For those who wish they had never been born, we allow suicide; for those who do not exist but would prefer existence if they did, we bring them into existence. No one is dissatisfied with their state of affairs.
There are still a number of challenges to this view. We could first ask what mechanism we are using to predict whether someone would have wanted to exist, if they did exist. One obvious way is to simulate them, and then ask them “Do you prefer existing, or do you prefer not to exist?” But by simulating them, we are bringing them into existence, and therefore violating their autonomy if they say “I do not want to exist.”
There could be ways of predicting this that do not rely on total simulation, but it is probably impossible to predict their answer perfectly without performing a simulation. At best, we could be highly confident. And if we were wrong, and someone who did want to come into existence was never brought into it because we failed to predict that, this too would violate their autonomy.
Another issue arises when we consider that there might always be some future in which the person would prefer to exist. Perhaps, in the eternity of all existence, there will always eventually come a time when even the death-inclined would have preferred to exist. Are we then disrespecting their ancient choice to remain nonexistent forever? There seem to be no easy answers.
We have arrived at an Arrow’s impossibility theorem of sorts. Is there a way to simultaneously respect people’s wishes to live forever and respect people’s wishes to die, in a way that matches all of our intuitions? Perhaps not perfectly, but we could come close.
However, even if we make the waiting period arbitrarily long, there’s a one hundred percent chance that you will die one day, even given your strong preference not to.
Not if the waiting period gets longer over time (e.g. proportional to lifespan).
Good point, although there’s still a nonzero chance that they will die, even if we continually extend the waiting period in some manner. And perhaps, given their strong preference not to die, this still violates their autonomy?
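A rough sketch of why both points hold, using the same independent-days idealization as in the post (the linear schedule below is just one illustrative choice): suppose a request made on day $n$ must be sustained through a waiting period of $W(n)$ further days, which happens with probability $p^{W(n)}$. By the union bound,

$$\Pr[\text{ever dying}] \le \sum_{n=1}^{\infty} p \cdot p^{W(n)} = \sum_{n=1}^{\infty} p^{W(n)+1}.$$

With a waiting period proportional to lifespan, say $W(n) = n$, the sum is $p^{2}/(1-p)$, which for $p = 10^{-12}$ is about $10^{-24}$: death is no longer guaranteed. But the chance is still strictly positive, since dying during the very first episode alone has probability $p^{W(1)+1} > 0$.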
You don’t need anywhere near as stark a contrast as this. In fact, it’s even harder if the agent (like many actual humans) has previously considered suicide, and has experienced joy that they didn’t go through with it, followed by periods of reconsideration. Intertemporal preference inconsistency is one effect of the fact that we’re not actually rational agents. Your question boils down to “when an agent has inconsistent preferences, how do we choose which to support?”
My answer is “support the versions that seem to make my future universe better”. If someone wants to die, and I think the rest of us would be better off if that someone lives, I’ll oppose their death, regardless of what they “really” want. I’ll likely frame it as convincing them they don’t really want to die, and use the fact that they didn’t want that in the past as “evidence”, but really it’s mostly me imposing my preferences.
There are some with whom I can have the altruistic conversation: future-you AND future-me both prefer you stick around. Do it for us? Even then, you can’t support any real person’s actual preferences, because they don’t exist. You can only support your current vision of their preferred-by-you preferences.
A person could be split into two parts: one that wants to die and another that wants to live. Then the first part is turned off.