Yet it seems like most people are not applying the same thought processes to life-extending technology.
Many people have applied the same thought process. The transhumanists have just finished already.
The main reason the decision is easier for life extension than for AI is that the goodness of an AI depends on exactly what its goal system does and on how people value what the AI does, whereas living longer is valuable simply because people value it. AI is also a fairly technical subject with many opportunities for mistakes like anthropomorphism; living longer is just living longer.
Sure, there are some extra considerations. But imagine going to a society where people lived about 800 years and saying “hey, I have a great idea, why don’t we kill everyone off on their 100th birthday! Think of all these great effects it would have!” Those great effects are simply so much smaller than the value of life that the 800-year society might even lock you up someplace with soft walls.
imagine going to a society where people lived about 800 years and saying “hey, I have a great idea, why don’t we kill everyone off on their 100th birthday! Think of all these great effects it would have!” Those great effects are simply so much smaller than the value of life that the 800-year society might even lock you up someplace with soft walls.
This argument has been made a number of times, and it takes the wrong view. The issue is foresight versus hindsight. The argument in the OP never says that only living to 100 is great and has amazing effects; it says that increasing life span could carry devastating risks. Starting with a society where people already live to 800 years means starting with a society where those risks have already been mitigated.
In other words, increasing life span from 100 to 800 years could cause, say, overpopulation problems that lead to societal or environmental collapse, so it should be approached with caution. If, however, you already have a society where the life span is 800 years and society is functioning, then those risks have been negated, and of course there would be no reason to kill people.
If, however, life span were raised to 800 years, and it did cause overpopulation problems that threatened societal or environmental collapse, then yes, I might advocate killing off some people in order to save the whole of society.
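To make the overpopulation worry concrete, here is a toy stationary-population calculation (a sketch under simplifying assumptions, not anything from the commenters; the function name and the birth-rate figure are made up for illustration): in a stationary population, size is roughly births per year times life expectancy, so an eightfold jump in lifespan with an unchanged birth rate implies roughly an eightfold jump in equilibrium population.

```python
# Toy stationary-population model (illustrative only): at equilibrium,
# population ~= births per year * lifespan, since each cohort persists
# for one lifespan before dying out.
def steady_state_population(births_per_year: float, lifespan_years: float) -> float:
    """Equilibrium size of a population with constant births and a fixed lifespan."""
    return births_per_year * lifespan_years

births = 1_000_000  # hypothetical constant birth rate (people per year)

p100 = steady_state_population(births, 100)  # 100-year lifespan
p800 = steady_state_population(births, 800)  # 800-year lifespan

print(p800 / p100)  # 8.0: lifespan x8 with unchanged births => population x8
```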
A more familiar illustration of foresight versus hindsight: modern people, with hindsight, know it would probably have been better not to be so harsh on the Germans in the Treaty of Versailles after WWI. The Allied Powers, however, did not have access to knowledge about the consequences of their actions, so they could not apply that unknown knowledge to their decisions.
Starting with a society where people already live to 800 years means starting with a society where those risks have already been mitigated.
Not necessarily at all. Imagine a society where people die an eighth as fast as we do, so the kinds of change that require people dying happen only an eighth as fast as they do for us. Imagine they were facing a much worse risk of overpopulation, because women could choose to remain fertile for more of their lives. Imagine that some people who wanted to die didn’t.
People would STILL refuse to start shooting centenarians. Adult education or drugs that enhance mental flexibility would be better than being shot. Vasectomies would be better than being shot. Allowing voluntary death would be better than shooting everyone. Seriously, what kind of person would look at impending overpopulation and say, “don’t worry about contraceptives, let’s just kill 7/8 of the humans on earth”?
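The same toy arithmetic from the sketch above shows why this is right: cutting the birth rate and culling the population are interchangeable levers on the equilibrium size, so the humane lever does everything the horrific one would. This is again a hypothetical back-of-envelope model, not anything from the thread.

```python
# Continuing the hypothetical model above: reducing births to 1/8 reaches the
# same equilibrium as killing 7/8 of the population, without killing anyone.
def steady_state_population(births_per_year: float, lifespan_years: float) -> float:
    return births_per_year * lifespan_years

births, long_life = 1_000_000, 800  # made-up figures for illustration

cull = steady_state_population(births, long_life) / 8       # kill 7/8 of people
fewer_births = steady_state_population(births / 8, long_life)  # 1/8 the birth rate

print(cull == fewer_births)  # True: identical steady state, no shootings required
```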
Heck, we may be facing impending overpopulation right now, depending on what happens with the environment! Should we kill everybody partway through their reproductive years, to avoid it? Of course not! This sort of failure of imagination is a pretty recognizable part of how humans defend privileged hypotheses.