Yet it seems like most people are not applying the same thought processes to life-extending technology.
Many people have applied the same thought process. The transhumanists have just finished already.
The main reason the decision is easier for life extension than for AI is that the goodness of AI depends on exactly what its goal system does and on how people value what the AI does, while living longer is valuable simply because people value it. AI is also a fairly technical subject with many opportunities for mistakes such as anthropomorphism, while living longer is just living longer.
Sure there are some extra considerations. But imagine going to a society where people lived about 800 years and saying “hey, I have a great idea, why don’t we kill everyone off on their 100th birthday! Think of all these great effects it would have!” Those great effects are simply so much smaller than the value of life that the 800-year society might even lock you up someplace with soft walls.
imagine going to a society where people lived about 800 years and saying “hey, I have a great idea, why don’t we kill everyone off on their 100th birthday! Think of all these great effects it would have!” Those great effects are simply so much smaller than the value of life that the 800-year society might even lock you up someplace with soft walls.
This argument has been made a number of times, and it takes completely the wrong view. The issue is foresight versus hindsight: the argument in the OP never says that only living to 100 is great and has amazing effects. Instead, it says that increasing life span could have devastating risks. Starting with a society where people already live to 800 years means starting with a society where those risks have already been mitigated.
In other words, increasing life span from 100 to 800 could cause, say, overpopulation problems, which could lead to societal or environmental collapse. Therefore it should be approached with caution. If, however, you already have a society where the life span is 800 years and society is functioning, then those risks are negated, and of course there would be no reason to kill people.
If, however, life span were raised to 800 years, and it did cause overpopulation problems, and those problems did threaten societal or environmental collapse, then yes, I might advocate killing off some people in order to save the whole of society.
A more understandable version of foresight v. hindsight: modern people, with hindsight, know it would probably have been better not to be so harsh on Germany in the Treaty of Versailles after WWI. But because the Allied Powers did not have access to knowledge about the consequences of their actions, they could not apply that knowledge to their decisions.
Starting with a society where people already live to 800 years means starting with a society where those risks have already been mitigated.
Not necessarily at all. Imagine a society that changed the things that require people dying only 1⁄8 as fast as we did. Imagine it faced a much worse risk of overpopulation, because women could choose to remain fertile for more of their lives. Imagine that some people who wanted to die didn't.
People would STILL refuse to start shooting centenarians. Adult education or drugs that enhance mental flexibility would be better than being shot. Vasectomies would be better than being shot. Allowing voluntary death is better than shooting everyone. Seriously, what kind of person would look at impending overpopulation and go, “don’t worry about contraceptives—let’s just kill 7⁄8 of the humans on earth”?
Heck, we may be facing impending overpopulation right now, depending on what happens with the environment! Should we kill everybody partway through their reproductive years, to avoid it? Of course not! This sort of failure of imagination is a pretty recognizable part of how humans defend privileged hypotheses.
However, scientists are working on these technologies right now, discovering genes whose protein products can be blocked to greatly increase the life spans of worms, mice, and flies. Should a breakthrough discovery be made, who knows what will happen? Once it's developed, there's no going back. If the technology exists, people will stop at nothing to use it. You won't be able to control it.
Just like AI, life-extending technologies are not inherently “bad”. But supporting the development of life-extending technologies without already answering the above questions is like supporting the development of AI without knowing how to make it friendly. Once it’s out of the box, it’s too late.
I was thinking about your post, and these parts don't sound convincing enough to me. You could create a police-state society that stops every person, checks their birth date on a government-mandated ID, and arrests or shoots anyone over 125 (or whatever the age is). Police states are not a GOOD thing by any means, and I am not recommending one. But the idea that “You won’t be able to control it” just seems like a very odd thing to announce for any kind of biological life-extension technology. Why are we talking about it as an unstoppable opponent in the same manner people think of an AI?
And on the breakthrough side: even if we literally developed a pill that would “cure all cancers for a dollar, side-effect free,” that would be a STUNNING breakthrough by today’s research standards. But we would need an even bigger breakthrough to get life-extension effects like what you're describing, or more likely several breakthroughs in separate fields. Are we really anywhere near that?
I suppose, to summarize my current beliefs: it is possible that lifespan will go up exponentially at some point, through a biological method, but I don’t see that happening yet, I definitely don’t see it being unstoppable, and there are other technological events that I would expect to hit a crisis point far sooner.
Is there evidence that I’m not aware of that would make me change my thoughts on this?
it is possible that lifespan will go up exponentially at some point, through a biological method, but I don’t see that happening yet
If you are not very old, you only have to increase your expected lifespan faster than time passes; just change the slope to >1 and you are out of the woods, so to speak. At the moment my life expectancy (based upon the population I belong to) increases by about 3 months per year; if that rate itself were to increase, I would have a shot at reaching longevity escape velocity.
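The slope arithmetic above can be sketched as a toy model (the function name and numbers here are purely illustrative, not a demographic projection): each calendar year you age one year, while medical progress adds some number of years to your remaining life expectancy. If the annual gain exceeds one year, the expectancy never runs out.

```python
def years_until_death(remaining, gain, horizon=1000):
    """Toy longevity-escape-velocity model.

    remaining: current remaining life expectancy in years.
    gain: years of expectancy added by medical progress per calendar year.
    Returns the year in which expectancy hits zero, or None if it never
    does within the horizon (escape velocity reached).
    """
    for year in range(1, horizon + 1):
        # Age one year, gain `gain` years of expectancy: net change gain - 1.
        remaining += gain - 1.0
        if remaining <= 0:
            return year
    return None

# Current trend from the comment: ~3 months (0.25 years) gained per year.
print(years_until_death(50, 0.25))  # → 67 (expectancy shrinks 0.75/yr)
print(years_until_death(50, 1.5))   # → None (gain outpaces aging)
```

With a gain of 0.25 the slope is below 1 and the clock still runs out, just more slowly; any gain above 1 means the "out of the woods" case the comment describes.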
But we would need an even bigger breakthrough to get to life extension effects to what you’re saying, or more likely, several breakthroughs in separate fields. Are we really anywhere near that?
You are quite right. According to SENS, there are seven categories of “damage” that define aging:
From Wikipedia:
cell loss or atrophy (without replacement),
oncogenic nuclear mutations and epimutations,
cell senescence (death-resistant cells),
mitochondrial mutations,
intracellular junk, i.e. junk inside cells (lysosomal aggregates),
extracellular junk, i.e. junk outside cells (extracellular aggregates),
random extracellular cross-linking.
If you look at each category independently, the problem appears rather incremental; it's not very hard, for example, to imagine that we will have livers made from scratch in the clinic within a decade or two.
The analogy of Life Extension to AGI seems weak to me. The problems of LE (if any surface at all, which we don't really know will happen) would be very slow to take place, and we would be able to watch them unfold and fix them as they happen. AGI is more like splitting the atom, in that it could easily reach critical mass and go beyond our capability to control in a matter of seconds.
So no, the same level of urgent caution is not merited. I’m not saying there’s no way being careful can help us, but the mere act of conferring biological immortality through incremental advances in genetics is not something that should rationally make you worry in the same way that AGI should.
It’s really just a minor infrastructure change as far as humanity is concerned, one which makes reproduction less necessary, reduces the demand for (new) basic skills education, etc. -- it doesn’t reorganize the cosmos, muck around with our basic psychological make-up, or anything like that.
Heck, I’d almost say that being able to synthesize fresh meat and veggies from single cells, rather than farming them, is a comparatively bigger change for us to get used to. Eating is something we do every day, whereas aging is something we do once in our whole lifetime, at a slow and unnoticeable rate.
I’m rather glad you made this article, old sport, as it made me realize that I was treating this particular debate as an “arguments as soldiers” issue, wherein I must crush all opposing arguments and may never betray an ally. This was, of course, incorrect thinking, and so I’m glad I don’t think that way anymore.
But looking at all the comments so far, it doesn’t appear that there are any strong objections to life-extension technology. There are the expected problems and the unexpected problems, but the same could easily be said of all sorts of tragedy removals that technological progress has brought us over time. None of the ideas brought up in this post even attempts (so far as I can tell) to argue against at least doing research on life extension, and, as others have brought up, the near-impossibility of existential risk from such technologies makes the AGI analogy extremely weak.
So, good thing to bring up. But for the most part, we’re done here.