Some scary life extension dilemmas
Let’s imagine a life extension drug has been discovered. One dose of this drug extends one’s life by 49.99 years. The drug also has a mild cumulative effect: if it is given to someone who has been dosed with it before, it extends their life by 50 years instead.
Under these constraints, the most efficient way to maximize the total amount of life extension this drug can produce is to give every dose to one individual. If there were seven billion doses available, one for each person alive on Earth, then giving every person one dose would result in a total of 349,930,000,000 years of life gained. If one person were given all the doses, a total of 349,999,999,999.99 years of life would be gained. Sharing the life extension drug equally would therefore result in a net loss of almost 70 million years of life. If you’re concerned about people’s reaction to this policy, we could make it a big lottery, where every person on Earth gets to gamble their dose for a chance at all of them.
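As a sanity check on those figures, here is a minimal Python sketch of the arithmetic; the population count and per-dose values are simply the numbers assumed above:

```python
from decimal import Decimal

POPULATION = 7_000_000_000
FIRST_DOSE_YEARS = Decimal("49.99")   # years gained from someone's first dose
REPEAT_DOSE_YEARS = Decimal("50")     # years gained from each repeat dose

# Everyone receives exactly one dose.
shared_total = POPULATION * FIRST_DOSE_YEARS

# One person receives every dose: one first dose plus (POPULATION - 1) repeats.
monopolized_total = FIRST_DOSE_YEARS + (POPULATION - 1) * REPEAT_DOSE_YEARS

print(shared_total)                      # 349930000000.00
print(monopolized_total)                 # 349999999999.99
print(monopolized_total - shared_total)  # 69999999.99
```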
Now, one could make certain moral arguments in favor of sharing the drug; I’ll get to those later. However, it seems to me that gambling your dose for a chance at all of them isn’t rational from a purely self-interested point of view either. You will not win the lottery. Your chances of winning this particular lottery are roughly 1 in 7 billion, more than twenty times worse than your chances of winning the Powerball jackpot. If someone gave me a dose of the drug and then offered me a chance to gamble in this lottery, I’d accuse them of Pascal’s mugging.
Here’s an even scarier thought experiment. Imagine we invent the technology for whole brain emulation (WBE). Let “x” equal the amount of resources it takes to sustain a WBE through 100 years of life. Let’s imagine that, with this particular technology, it costs 10x to convert a human into a WBE and 100x to sustain a biological human through the course of their natural life. Finally, let the cost of making additional copies of a WBE, once someone has been converted, be close to zero.
Again, under these constraints it seems like the most effective way to maximize total life extension is to convert one person into a WBE, then kill everyone else and use the resources that were sustaining them to create and sustain more WBEs. Again, if we are concerned about people’s reaction to this policy, we could make it a lottery. And again, if I were given a chance to play in this lottery I would turn it down and consider it a form of Pascal’s mugging.
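To make the trade-off concrete, here is a rough sketch in the same spirit. The 80-year natural lifespan is my own illustrative assumption; the scenario only fixes the costs (100x per biological life, 10x per conversion, x per 100 WBE-years, roughly zero per extra copy):

```python
# Resource comparison in units of "x". The 80-year natural lifespan is an
# assumption for illustration; it is not specified in the scenario above.
POPULATION = 7_000_000_000
NATURAL_LIFESPAN_YEARS = 80          # assumed natural biological lifespan
COST_BIOLOGICAL_LIFE = 100           # x, sustains one human through a natural life
COST_CONVERSION = 10                 # x, converts one human into a WBE
COST_PER_WBE_CENTURY = 1             # x, sustains one WBE for 100 years

# Total budget: enough resources to sustain everyone biologically.
budget = POPULATION * COST_BIOLOGICAL_LIFE

# Option A: everyone lives out a biological life.
biological_years = POPULATION * NATURAL_LIFESPAN_YEARS

# Option B: convert one person, spend the remaining budget on WBE-years.
wbe_years = (budget - COST_CONVERSION) / COST_PER_WBE_CENTURY * 100

print(f"{biological_years:.2e} biological life-years")  # ~5.60e+11
print(f"{wbe_years:.2e} WBE life-years")                # ~7.00e+13, about 125x more
```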
I’m sure that most readers, like me, would find these policies deeply objectionable. However, I have trouble finding objections to them from the perspective of classical utilitarianism. Indeed, most readers have probably noticed that these scenarios closely resemble Nozick’s “utility monster” thought experiment. Here is a list of the possible objections I have been considering:
1. First, let’s deal with the unsatisfying practical objections. In the case of the drug example, it seems likely that a more efficient form of life extension will be developed in the future. In that case it would be better to give everyone the drug to sustain them until that time. However, this objection, like most practical ones, seems unsatisfying. It seems like there are strong moral objections to not sharing the drug, independent of any practical considerations.
Another pragmatic objection is that, in the case of the drug scenario, the lucky winner of the lottery might miss their friends and relatives who have died. And in the WBE scenario it seems like the lottery winner might get lonely being the only person on Earth. But again, this is unsatisfying. If the lottery winner were allowed to share their winnings with their immediate social circle, or if they were a sociopathic loner who cared nothing for others, it still seems bad that they end up killing everyone else on Earth.
2. One could use the classic utilitarian argument in favor of equality: diminishing marginal utility. However, I don’t think this works. Humans don’t seem to experience diminishing returns from lifespan in the same way they do from wealth. It’s absurd to argue that a person who lives to the ripe old age of 60 generates less utility than two people who die at age 30 (all other things being equal). The reason the diminishing-marginal-utility argument works for equality of wealth is that people are limited in their ability to extract utility from wealth: there is only so much time in the day to spend enjoying it. Extended lifespan removes that restriction, making a longer-lived person essentially a utility monster.
3. My intuitions about the lottery could be mistaken. It seems to me that if I were offered the chance to gamble my dose of the life extension drug against just one other person’s, I still wouldn’t do it. If I understand the probabilities correctly, gambling for a 50% chance at either 0 or 99.99 additional years is equivalent, in expectation, to a certainty of an additional 49.995 years of life, which beats the certain 49.99 years I’d have if I declined the gamble. But I still wouldn’t do it, partly because I’d be afraid of losing and partly because I wouldn’t want to kill the person I was gambling with.
So maybe my horror at these scenarios is driven by that same hesitancy. Maybe I just don’t understand the probabilities right. But even if that is the case, even if it is rational for me to gamble my dose with just one other person, it doesn’t seem like the gambling would scale. I will not win the “lifetime lottery.”
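For what it’s worth, here is a minimal sketch of the expected-value comparison behind that reasoning, using the same assumed numbers as before. It shows why both gambles look good in expectation, and why that does nothing to improve your odds of surviving the full lottery:

```python
# Expected value of keeping your dose versus gambling it (a rough sketch
# using the same numbers as above; all figures are additional life-years).
SURE_THING = 49.99                        # years from simply keeping your dose

# Two-person coin flip: win both doses (49.99 + 50) or end up with nothing.
two_person_ev = 0.5 * (49.99 + 50.0)
print(two_person_ev > SURE_THING)         # True: about 49.995 vs 49.99

# Global lottery: one winner takes every dose, everyone else gets nothing.
POPULATION = 7_000_000_000
jackpot = 49.99 + (POPULATION - 1) * 50.0
lottery_ev = jackpot / POPULATION
print(lottery_ev > SURE_THING)            # True: about 50.0 vs 49.99
print(1 / POPULATION)                     # about 1.4e-10 chance of winning anything
```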
4. Finally, we have those moral objections I mentioned earlier. Utilitarianism is a pretty awesome moral theory under most circumstances. However, when it is applied to scenarios involving population growth, or scenarios where one individual is vastly better at converting resources into utility than their fellows, it tends to produce very scary results. If we accept the complexity of value thesis (and I think we should), this suggests that there are other moral values that are not salient in the “special case” of scenarios without population growth or utility monsters, but that become relevant in scenarios where those features are present.
For instance, it may be that prioritarianism is better than pure utilitarianism, in which case sharing the life extension method might be best because of the benefits it accords the worst off. Or it may be (in the case of the WBE example) that having a large number of unique, worthwhile lives in the world is valuable because it produces experiences like love, friendship, and diversity.
My tentative guess at the moment is that there probably are some other moral values that make the scenarios I described morally suboptimal, even though they seem to make sense from a utilitarian perspective. However, I’m interested in what other people think. Maybe I’m missing something really obvious.
EDIT: To be clear, when I refer to the “amount of years added,” I am assuming for simplicity’s sake that all the years added are years the person whose life is being extended wants to live, and that they contain a large number of positive experiences. I’m not saying that lifespan is exactly equivalent to utility. The problem I am trying to resolve is that the scenarios I’ve described seem to maximize the number of positive experiences it is possible for the people in the scenario to have, even though they involve killing the majority of the people involved. I’m not sure “positive experiences” is exactly equivalent to “utility” either, but it’s likely a much closer match than lifespan.