Summary: In a world where population has reached a steady state, extending one person's lifespan means taking resources away from other potential people, so life extension technology may not be ethical in that case. Because we are not at a steady state, this does not argue against working on life extension technology today.
Many people research radical life extension, dreaming of a day when we might live forever. But I fear that advocates of life extension have overlooked an important concern about immortality: is it ethical?
The argument here is simple: in a world where the population has reached a steady state, another year of life for me reduces the resources available to everyone else. This not only means fewer resources for existing humans, it also means fewer resources to support a larger population. In this circumstance, choosing to live another year means preventing someone else from living another year. No one would argue against a 25-year-old choosing to live another year, but what if they are 1000 years old? In a steady-state population, if they chose to continue for another 1000 years, they would essentially be preventing 10 normal human lifetimes (of roughly 100 years each) within that span. If you treat potential or future humans as moral agents in their own right, it might make sense to ask the 1000-year-old to let someone new take their place.
The above argument makes two important assumptions. The most important is that the population is in a steady state. If the population is still growing, the argument becomes much weaker, since there are no major resource constraints on creating new life. The second assumption involves taking future generations and potential lives into account. If you don't care about future people, then it doesn't matter that you are taking resources from them! Similarly, if you do not believe in taking into account your impact on potential human lives, it doesn't matter that you could give your resources to them.
The first assumption seems like it could be reasonable in the far future (though it may be possible for humanity to keep expanding into space forever). The second assumption is a deep philosophical issue which I will not get into here, but it seems reasonable to me. Taking both assumptions as given, I want to develop some intuition for this problem and suggest some solutions.
At its core, immortality is a problem of inequality. In a world of immortals, some people could enjoy centuries of life while others would never be born due to resource shortages. It seems strange to imagine ghostly people waiting for their chance at life while the living enjoy their extended lifespans, so here is a cleaner comparison:
- A world with enough resources to be inhabited by 1 person (Bob), who lives for 10,000 years, spending his days on solitary activities such as meditation and blogging.
- A world with 100 people (including Bob), each living for 100 years, engaging in group activities such as chess tournaments and Shakespeare-in-the-park (in addition to solitary activities).
Note that these scenarios involve the same total life-years (1 × 10,000 = 100 × 100 = 10,000) and the same total resources consumed. The first scenario concentrates all of that life in the hands of one person, whereas the second scenario is more egalitarian.
The second scenario also highlights the fact that with more people, there are more possible activities to do and more opportunities to make friends. This suggests that spreading life-years out more might lead to more overall enjoyment (even for Bob).
Another argument against scenario 1 appeals to diminishing returns. What happens if Bob runs out of things to do after only 1000 years? What would you do for 10,000 years? Each additional year of life becomes less and less exciting as you run out of things you haven't tried. On the other hand, each year of life for a young person is full of wonder; everything is new. Wouldn't it make sense for an immortal to donate a lifetime to a child? In this framing, scenario 2 looks like a much better deal.
Say we are convinced by the above arguments. What should the immortal citizens of this steady-state population do? What laws would encourage people to share life? One simple approach is to allot each person the same number of years of life; after (say) 1000 years, a person is required to pass on and make room for someone new. A clever tax scheme could also be used, charging a larger tax for each additional year of life. A complete approach would need to take into account a person's benefits to society, the strength of their desire to live longer, and the benefit of bringing a new person into the world.
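As a toy illustration of what such an escalating tax could look like, here is a minimal sketch; the free allowance, base rate, and growth factor are made-up parameters, not a proposal:

```python
def life_year_tax(age, free_years=100, base_rate=1.0, growth=1.05):
    """Tax charged for one additional year of life at a given age (arbitrary units).

    Years within the free allowance cost nothing; beyond it, the charge
    compounds, so a 1000-year-old pays far more for their next year than
    a 150-year-old does. All parameters here are made up for illustration.
    """
    if age < free_years:
        return 0.0
    return base_rate * growth ** (age - free_years)


print(life_year_tax(90))    # 0.0   (within the free allowance)
print(life_year_tax(150))   # ~11.5 (1.05 ** 50)
print(life_year_tax(1000))  # astronomically large (1.05 ** 900 is about 1e19)
```

The compounding rate is the knob that decides how strongly the scheme pushes very old people to pass their resources on.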
This argument does not imply that we should stop trying to extend natural lifespan today; it would be pretty shocking if the current human lifespan perfectly balanced happiness with equality. Rather, it is important to develop life extension technologies and then think carefully about how to use them. In fact, this might not even become a major issue once such technologies are built: it is possible that many people will choose to pass on much sooner than they would be required to. Fortunately, since this future is a long way off, for now the problem is merely interesting to ponder.
I would like this better if you’d clarified which decisions or actions you’re evaluating. I’d also prefer you not imply that “is it ethical” is a question that makes any sense without more context. And finally, you should at least mention other explorations of population ethics, including Parfit’s Repugnant Conclusion (https://en.wikipedia.org/wiki/Mere_addition_paradox). That said, it’s always interesting to hear different positions on the topic, and I don’t want to discourage that. You’re clear that it’s ok (or maybe even good) to seek life extension in the current world. And you imply that there is some potential future state where it’s not OK. Where would you draw the line?
Note that your exploration doesn’t require immortality, only the decision of whether a given set of resources is used to enhance/extend one life, or to enhance/create/extend another. This is no different than any other resource allocation question, is it? Specifically, does your exploration address anything different from “if you choose to live another day, you use resources that would otherwise be available to someone else”? And is that significantly different from “if you buy a latte rather than drinking tap water, you use resources that would otherwise be available to someone else”?
I appreciate your feedback!
On reflection, I agree that this post could benefit from a clearer example framed as a decision. Of course, I have also sidestepped the discussion of ethical premises.
More context in population ethics would be nice, but I wanted to avoid focusing on the Repugnant Conclusion because the goal was to examine a more neglected question: how do we trade off utility between existing lives and potentially-existing lives?
Sort of! It is mostly a resource allocation question if you are willing to trade utility between existing lives and potentially-existing lives (as I am), but many people instinctively disagree with that. Additionally, there are network effects to consider: scenario 2 allows many activities that are simply not possible in scenario 1.
I don’t have concrete answers on all of these questions, but most people seem to have a strong presumption for a world more like scenario 1, which seems unjustified.
This framing seems strange to me. It seems mostly an issue we might face, say, in the far future where there is no more frontier, all available energy is being used for something, and we have to decide on tradeoffs between existing beings and possible future ones (or on the relative size of beings, or how much "cpu time" they get relative to "wall time"). As it stands today, this seems a non-issue, since we are not at risk of preventing future people from existing by extending lives today (unless you have a compelling argument that this is the case).
I completely agree and I support further research on radical life extension (we might as well have the option and decide on the ethics later).
This is not a near-term issue in any sense, rather, it is more of an “ethical brainteaser” to identify different intuitions people have.
I believe I could find enough fun for myself for at least 1000 years. Not sure about more, but I'd worry about that when I get there.
For example, writing a book, or coding a non-trivial program takes about a year. Studying something, five years. Becoming a master, ten years. I could easily spend a century just doing things that seem interesting to me now, and I suppose I would find new hobbies along the way. There are also thousands of books to read, movies to watch, computer games to play...
Now imagine the opposite thought experiment: a world run by intelligent robots who produce kids in artificial wombs, let them explore the world for 1 year, and then kill them to make space for more kids. Does this sound like an exciting world-exploration utopia?
I do have one useful comment. Arguments like this are used to justify making no attempt at even the slightest life extension now: "since even if I could survive today, I might die at some far-future date, perhaps when I get bored of life in 10k years or when the universe runs out of fuel in a few trillion years. So I'll take oblivion right now, at a mere 75 or 85 years, because in the end what's the difference?"
This argument seems flawed because quantities matter. It is comparable to allowing a 5-year-old to commit suicide because they are bored with life and want to make room for another 5-year-old.
And second, while there might or might not be a solution to the heat death of the universe, there is a solution to your "the world is too crowded with immortals, kill them" problem. If you absolutely wanted to reduce crowding but were against death, merges are a viable solution.
Any technology that can fix every possible flaw a human body could encounter and keep a mind running through any insult or injury, by its nature, has the ability to capture, back up, store, and even merge human memories. A merge could be as simple as finding another immortal human with a very similar personality and life experience and de-duplicating the two of you. The trivial example: if you made a clone of yourself yesterday, and both of you have only experienced today as a divergence, a merge would cause you both to remember today (and you would be reduced to a single merged self running on one set of hardware). This is easier than it sounds if you think of memory as a simple set: after the divergence you have two partially overlapping keys for "today", one from yourself and one from your clone.
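A toy sketch of that set-style merge, with hypothetical day-keyed "memories" (the structure is purely illustrative, not a claim about how memories actually work):

```python
def merge_memories(mem_a, mem_b):
    """Merge two day-keyed memory 'sets' into one.

    Days recorded by only one copy are kept as-is; days recorded
    differently by both (the overlapping keys) keep both versions,
    which is where the ambiguity about "what did you do today" comes from.
    """
    merged = {}
    for day in sorted(mem_a.keys() | mem_b.keys()):
        if day in mem_a and day in mem_b and mem_a[day] != mem_b[day]:
            merged[day] = [mem_a[day], mem_b[day]]  # two keys for "today"
        else:
            merged[day] = mem_a.get(day, mem_b.get(day))
    return merged


# Hypothetical example: a clone made yesterday, so only "today" has diverged.
original = {"yesterday": "made a clone", "today": "wrote a blog post"}
clone = {"yesterday": "made a clone", "today": "played chess"}

print(merge_memories(original, clone))
# {'today': ['wrote a blog post', 'played chess'], 'yesterday': 'made a clone'}
```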
We can already do such merges with AI agents built on neural networks; this is how OpenAI trains in parallel over large clusters, where each worker's updates are averaged back into a single shared model.
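For what it's worth, here is a minimal sketch of that kind of merge for two neural-network replicas, assuming plain element-wise parameter averaging; the layer names and equal weighting are illustrative, not any particular system's API:

```python
import numpy as np

def merge_replicas(params_a, params_b, weight_a=0.5):
    """Element-wise weighted average of two parameter sets with matching shapes.

    Two replicas that diverged from a common starting point are reduced
    to a single set of parameters, analogous to the memory merge above.
    """
    return {name: weight_a * params_a[name] + (1 - weight_a) * params_b[name]
            for name in params_a}


# Two hypothetical replicas that trained on different data after a shared init.
replica_a = {"layer1": np.array([0.2, 0.4]), "layer2": np.array([1.0])}
replica_b = {"layer1": np.array([0.4, 0.0]), "layer2": np.array([0.8])}

print(merge_replicas(replica_a, replica_b))
# {'layer1': array([0.3, 0.2]), 'layer2': array([0.9])}
```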
Some memories would become ambiguous ("what did you do today?"), but you would gain the skill updates from both yourself and your clone.
Anyway, you can merge as often as needed (a little data gets lost in each merge), and you can use merges to reduce the population until it can support however many new individuals you believe is ethical.
Merges are a general solution to most problems you would encounter with a technology that allows for copying of memories.
Yes, I definitely agree that this is not an argument against research on life extension.
Merging is very interesting; it also seems like it would keep AIs or humans from self-copying too much. Agents could also merge in a temporal sense by, say, alternating the days on which they run, or by running more slowly (and thus using fewer resources).