On “Friendly” Immortality
Personal Note: I would like to thank Normal Anomaly for beta-ing this for me and providing counter-arguments. I am asking him/her to comment below, so that everyone can give him/her karma for volunteering and helping me out. Even if you dislike the article, I think it’s awesome that they were willing to take time out of their day to help someone they’ve never met.
Imagine that you live in a world where everyone says “AI is a good idea. We need to pursue it.”
Sounds great!
But what if no one really thought that there was any reason to make sure the AI was friendly? That would be bad, right? You would probably think: “Hey, AI is a great goal and all, but before we start pursuing it and actually developing the technology, we need to make sure that it’s not going to blow up in our faces!”
That seems to me to be a rational response.
Yet it seems like most people are not applying the same thought processes to life-extending technology. This website in particular has a habit of using some variant of this argument: “Death is bad. Not dying is good. Therefore life-extending technologies are also good.” However, this argument is missing the same level of contemplation that has been given to AI. Like AI, there are considerations that must be made to ensure this technology is “friendly”.
Most transhumanists have heard many of these issues before, normally sandwiched inside a “Death is Bad” conversation. However, these important considerations are often hand-waved away, as the conversation tends to stick to the low-hanging fruit. Here, I present them all in one place, so we can tackle them together and perhaps come up with some solutions:
Over-population: For example, doubling the life-span of humans would at the very least double the number of people on this planet. If we could double life-spans today, we would go from 7 billion to 14 billion people on Earth in 80 years, not counting regular population growth.
Although currently birthrates are falling, all birthrate information we have is for women being fertile for approximately 25 years. This has not changed much throughout history, so we cannot necessarily extrapolate the current birthrate to what it would be if women were fertile for 50 years instead.
In other words, not only will there be a population explosion due to people living longer, but I’d be willing to bet that if life-extension were available today, birth rates would also go up. Right now, people who like to have kids only have enough money and fertile years to raise on average 2-3 kids. If you doubled the time they had to reproduce, you would likely double the number of children that child-rearing families have.
For example, in modern society, by the time a woman’s children are out of the house and done with college, the woman is no longer young and/or fertile. Say, for example, you had a child when you were 25. By the time your children were 20 you would be 45, and therefore not at a comfortable age to have children. However, if 45 becomes a young/fertile age for women, families might well decide to have more children.
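The claim that doubling lifespan at least doubles population can be checked with a stationary-population approximation. This is a back-of-envelope sketch; the birth figure and lifespans below are illustrative assumptions, not demographic data:

```python
# Back-of-envelope check of the "doubling lifespan doubles population" claim.
# In a stationary population, population ~= annual births x average lifespan,
# so holding births fixed and doubling lifespan doubles the population.
# All numbers below are illustrative assumptions, not demographic data.

def steady_state_population(annual_births, lifespan_years):
    """Everyone lives exactly `lifespan_years`, so the living population
    is the last `lifespan_years` worth of birth cohorts."""
    return annual_births * lifespan_years

births = 90_000_000  # assumed constant annual births (illustrative)

before = steady_state_population(births, 78)   # roughly today's 7 billion
after = steady_state_population(births, 156)   # doubled lifespan

print(after / before)  # ratio is exactly 2.0
```

And if extended fertile years also raise the birth rate, as the post argues, the multiplier would be larger still.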
It’s one thing to say: “Well, we will develop technology to increase food yields and decrease fossil fuel consumption,” but are you positive we will have those technologies ready to go in time to save us?

Social Stagnation: Have you ever tried having a long conversation with an elderly person, only to realize that they are bigots/homophobes/racists, etc.? We all love Grandpa John and Grammy Sue, but they have to die for society to move forward. If there were 180-year-olds alive today, chances are pretty strong that a good number of them would think that being anti-slavery is pretty progressive. They would have been about 90 years old when women got the right to vote.
We don’t so much change our minds as we grow new people and the old ones die.

Life sucks, but at least you die: The world is populated with people suffering from mental disorders like depression, social issues like unemployment, and physical deprivations like poverty and hunger.
It doesn’t make sense to extend life until we have made our lives worth extending.

Unknown Implications: How will this change the way society works? How will it change how people live their lives? We can make some educated guesses, but we won’t know for sure what far-reaching effects this would have.
I have a friend who is a professional magician and “psychic”, and about a month ago I convinced him to read HPMoR. After cursing me for ruining his sleep schedule for two days, we ended up having a discussion about some of the philosophies in there that we agreed and disagreed with. I was brand-new to LW. He had no prior knowledge of “rationality”, but like most of his profession was very analytically minded. I would like to share something he wrote:
We have a lot of ancient wisdom telling us that wishes are bad because we aren’t wise, and you’re saying… that if we could make ourselves wise, then we can have wishes and not have it blow up in our faces.

See the shortest version of Aladdin’s Tale:
Wish One: “I wish to be wise.”
The End.
Since… I am NOT mature, fully rational, and wise,
I really think I shouldn’t have wishes,
Of which, immortality is an obvious specific example.
Because I’m just not convinced
That I can predict the fallout.
I call this “The CEV of Immortality”, although at the time, neither of us had heard of the concept of CEV in the first place. The basic idea is that we are not currently prepared to be experimenting with life-extending technologies. We don’t know where they will lead or how we will cope.
However, scientists are working on these technologies right now, discovering genes whose proteins can be blocked to greatly increase the life-spans of worms, mice, and flies. Should a breakthrough discovery be made, who knows what will happen? Once the technology is developed there’s no going back. If it exists, people will stop at nothing to use it. You won’t be able to control it.
Just like AI, life-extending technologies are not inherently “bad”. But supporting the development of life-extending technologies without first answering the above questions is like supporting the development of AI without knowing how to make it friendly. Once it’s out of the box, it’s too late.
Counter-arguments
(Provided by Normal Anomaly)
Overpopulation Counter-argument: Birth rates are currently going down, and have fallen below replacement in much of the developed world (including the US). According to an article in The Economist last year, population will peak at about 10-11 billion in about 2050. This UN infographic appears to predict that fewer people will be born in 2020-2050 than were born in 1980-2010. I am skeptical that birth rate will increase with life extension. Space colonization is another way of coping with more people (again, on a longer timescale than 40 years). Finally, life extension will probably become available slowly, at first only a few extra years and only for the wealthy. This last also applies to “unknown implications.”
Social Stagnation Counter-argument: This leads to a slippery slope argument for killing elderly people; it’s very unlikely that our current lifespans are at exactly the right tradeoff between social progress and life. Banning elderly people from voting or holding office would be more humane for the same results.

“Life sucks” Counter-argument: This is only an argument for working on making life worth extending, or possibly an argument for life extension not having the best marginal return in world-improvement. Also, nobody who doesn’t want to live longer would have to, so life extension technology wouldn’t result in immortal depressed people.
These counter-arguments are very good points, but I do not think they are enough to guarantee a 100% “Friendly” transhumanism. I would love to see some discussions on them.
Like last time I posted, I am making some “root” comments. They are: General comments, Over-population, Social stagnation, Life sucks, Unknown consequences. Please put your comment under the root it belongs to, in order to help keep the threads organized. Thank you!
Upvote here if you liked my contributions. Thanks for posting them, Daenerys.
If we avoid life-extension, we’re already killing elderly people. It’s not a slippery slope. It’s the same thing.
A minor point: I dislike the practice of setting up “root” comments in the way you have. It makes sorting comments by karma score all but useless.
Strongly agree.
Sure, but it has spurred more comment than otherwise would be made, and more directed comment at that. Net benefit.
Depends on the topic.
Indirect implications (other than the actual life-saving) of biological life extension aren’t going to be that significant in the long term, because we don’t have that much time left in business-as-usual mode. Even rather conservatively, ignoring the whole intelligence explosion thing, WBEs are very likely going to be implemented in 150 years at the latest, which is a change with much greater practical impact.
WBE’s?
Whole brain emulations, or uploads (added to Jargon file).
Not true at all. People who really like children have many more children than that. The Amish are a clear example, with their average of 5 or so children.
Yes, but the pace is faster than many people are comfortable with. Two kids in diapers and three more in grade school at the same time is a lot more work than many people who like children are really up for.
One reason Amish have so many children is that they are needed for farm work—you actually get a return on investment with children in a low-tech farm.
Do all Amish farm?
In light of this, the economic gains don’t seem to be the key factor here, especially since the cost of land is rising, so it’s harder and harder to help sons start their own farms.
Doesn’t seem that relevant considering we’re discussing the pros and cons of “immortality”. In any case, my point was just that some people like children enough to want them at that rate.
If I can have one child every decade for eighty years, maybe I’d like to have eight children. As it is now, it would only be two.
Directory trees are so last century.
Social Stagnation Discussion Thread
Contrary to lavalamp, in a society without age I expect more murder, not less.
Inequality and ambition will become more and more socially problematic as lifespans extend. Imagine not just competing against Rockefeller’s descendants, but a 172-year old John D. Rockefeller (still sharp as a tack), patriarch of a clan of 150. And 150 assumes that they just extend lifespans, not reproductive windows! Compound interest will become a serious issue, in wealth, intelligence, and time.
Generational warfare is limited, now, because most people are patient enough to wait for their parents and bosses to die or retire. When that is no longer an option, things will get bloody, and maybe it will sometimes be a polite sort of “forced retirement,” but I suspect that’ll be more difficult to pull off than murder. You don’t have the frailty that comes with age to force an accident, and it’ll be difficult to accumulate enough allies to unseat them; they’ve been around longer. Would Nelson be content being forever just a grandson (especially if more second-generation Rockefellers were being created)? Would John be content giving up the empire he created to watch his children or grandchildren mismanage it, when he feels just as hale and hearty as when he built it from scratch, and has the benefit of decades of experience?
As well, demographic trends that are vaguely worrying in the long run become short-run concerns. Suppose some violent, high birth-rate group could, in two centuries, become large enough to go around murdering other groups that don’t share their bloodline. Normally, that would be your great-grandchildren’s problem, not yours, and it’s easy to imagine their two-hundred-year plan falling apart. But with the originators around to see the thing to completion, and your head on the chopping block, you might be interested in nipping this in the bud (as once they get large enough, it will be difficult to kill enough of them to change their minds).
Arguments for disposing of “defectives” are also stronger, because once you have a high-quality person, they stick around. Why not populate the world with just high-quality people?
Overall, I think a world of thousand-year vampire clans will be a better one to live in (especially if I’m a first-generation vampire!), but I think society’s fractures will be deep and bloody.
This is Whig history. I am not confident that we are morally superior to prior generations or that future generations will be morally superior to us. Making this argument would require a non-trivial amount of intellectual labor.
A number of people have issued some variant of this response, to which I reply:
We don’t have to say that society is getting better, only that it is changing. If society is not changing then it is stagnant. If society is changing, then those best suited to whatever society currently exists are those born relatively recently.
In other words, maybe the society of 2700 isn’t actually any better than the world of 2200, but it is different. If 2700 is not different than 2200, then society is no longer “evolving”. It is static. However, since 2700 hopefully will be different, the people best suited to live in it are those born in 2650, not those born in 2200.
tl;dr- We don’t have to say that society is getting better. We just have to say that it’s changing and those born most recently are best adapted to it.
There is a significant amount of inferential distance here.
and
I do not know what you mean by this. It is not obvious that, at this point in time, those individuals born relatively recently are better suited to current society than those individuals born less recently. However, I also am not sure what you mean by “better suited.” Your language sounds slightly Darwinistic. However, I do not think that you are talking about fitness.
Actually, there is data on this. Why not crunch the numbers? Those born in 1941 are 70 today, but were 20 in 1961. Controlling for how getting older tends to make people more conservative, how do their opinions differ? Do you think people who don’t update on social norms, signalling and values are likely to die significantly more than others? If not, I think you will find a surprising shift in individual opinions over the decades, from what I recall of when I was last doing research on something related.
If I accept that “social progress” is going to slow down, but not stop, by some factor, this does not seem a grave tragedy, compared to the dis-utility of billions of deaths. Isn’t the whole reason we care about the speed of “social progress” because we dislike the dis-utility incurred by those who wouldn’t be incurring it in a different (better?) system?
A sufficiently long life in the improved system should outweigh any difference in the amount of time spent in suboptimal conditions. To pick one of your charged examples and put a face to it, wouldn’t you say Alan Turing might have decided to stick around if he knew he would still be young and healthy in 2010 and society would be accepting of homosexuality?
Eh. Turing committed suicide after two years of chemical castration by hormone injections (which can significantly impact personality and cognitive functioning), I’m not sure he would have been willing to put up with 11 more years.
It’s also not clear to me that LGBT rights would have advanced as quickly in the UK without Turing as a martyr, and so he might not have been looking at only 11 more years.
I don’t think we know why he killed himself, but I’m not sure the prospect of social acceptance of homosexuality in a few decades would really have inspired him to stick around. In fact...he did have that prospect within reasonable expectations of his natural life. He had still lost his security clearance and probably considered his career to be over.
Turn things around: in a world where people lived forever, we would not solve the problem of social stagnation by killing people.
EDIT: Hm, I guess this is extremely similar to the provided counter-argument.
I don’t think it would be considered “stagnation” from the perspective of the old folks. They might see it as society retaining its sanity. From that point of view, this is not a cost, but a benefit.
In order to count it as a cost, you’d first have to assume that the values that future folks would hold, given no life extension, would be better than the values people would hold given life extension. What are the reasons to think this?
It’s remarkable how many LWers fail to generalize the argument that Gandhi really doesn’t want to take a pill that makes him want to kill people.
Yes, because our neuroplasticity tends to decrease as we age. This is why it’s important to realize that anti-aging treatments must deal with the brain as well as with general bodily cell degradation. I think that many people in the field do realize this, though. (See also my previous comment on this subject.)
Cross-reference: this issue was discussed in Three Worlds Collide — the born-in-our-era Confessor says that he is too old to lead — that his generation was appalled by the decisions of the younger generation, but the by-then established social structures did not give them a say and rightly so.
Which hinges on “moral progress” currently happening or it being always desirable from our perspective.
An unexplored variant of the argument that I think works better would be one of the many examples of scientific revolutions marching on, one grave at a time. But obviously one could make a similar counter-argument that not all knowledge is good. Sometimes a better map gets you killed. But I would generally ignore that for now, since posters who question the high prior on the high instrumental value of a better map have a revealed preference for wanting a better map, considering the community they are a part of.
Edit: Missed this comment
Introspection works poorly here, as it often does. People are adept at changing their minds and forgetting they had ever thought differently.
A person with a good grasp of the outside view but who overly respected introspection might think: “I think I am right about things, but so does everybody. Furthermore, I haven’t significantly changed my mind from being wrong often, nor has anyone else. Society has been improving over time.
So long as I humbly and accurately recognize my not often changing my mind parallels others’ not often changing their minds, and it is unlikely I represent the pinnacle of moral progress throughout the ages (even assuming I’m better than my contemporaries) death is needed to replace people and maintain moral progress.”
The problem with that reasoning is that it is not true that people don’t change their minds—they mostly just think they don’t. They can honestly say they have always supported chief Grok, even when chief Urk was in charge.
But young people are more likely to support same-sex marriage.
Perhaps I worded it too strongly, though I don’t think so, but I didn’t mean to imply people never hold on to opinions established in their early life. I agree that on this issue (more than any other) generational churn is the biggest factor, but even here it is merely the largest one (I think).
“not a single state shows support for gay marriage greater than 35% amongst those 64 and older”—what were the opinions among this cohort years ago? When they were younger, did even half as many support gay marriage?
Also we do not know for sure that it is their age itself that creates more inflexibility of opinion. Many people in this age group are retired and consequently do not have as many social interactions outside of their in-groups.
I’m not sure what is meant by “inflexibility of opinion,” but for many possible concepts that would apply much more to the young than the old, and perhaps to the middle aged least of all.
If immortality hit today, roughly a quarter of the US population would be scientifically literate, and would likely remain so for the remainder of their lives, based on the age ranges that spread holds true of today. That’s rather disturbing, to me.
This seems like a real problem, but, well, I don’t know of any good way to deal with it. I don’t think that it’s a coincidence that the Brown vs. Board of Education ruling didn’t happen until after the Confederate war veterans were all dead.
As I once said some time ago: What kinds of things which are currently considered signs of social progress are also things that an average educated person from the U.S. of the 1800s would find horrifying?
And of those, which of them should we still find horrifying today?
You probably should read a little more history. Things were better for most blacks in 1910 than they were in 1940. Woodrow Wilson’s election, and his subsequent expulsion of blacks from the civil service, started a period of decline for blacks that was deepened by the Depression, but had strongly started improving after WW II. Brown vs Board of Education was a result of the change not a cause, and “Civil War” veterans were irrelevant.
Thomas Sowell has written on race problems, notably in Economics and Politics of Race and large parts of his memoir, A Personal Odyssey. His discussion of Brown in the latter is particularly interesting; he was attending Howard University at the time, though a lot of comments about it and its effects are spread throughout the book.
I’m not particularly surprised by this, actually… things also got worse after 1877 when federal troops left the South and paramilitary organizations started suppressing the black vote.
I just meant it as a milestone that showed how things had changed, not as a specific cause of that change. The ruling would never have been made without the successful execution of a long-term strategy to gradually change legal opinions. (The first schools to be integrated by the courts were state-run law schools...)
In this example, though, we’re the Confederate War Veterans—it seems rational, given our preferences, to crush the dreams of the future. (Or to alternate to an example with different affect: it’s rational for, say, an idealistic young lawyer to take precautions to ensure she doesn’t become an old scumbag lawyer, even if she knows the old scumbag would be very happy those precautions had not been taken.)
Of course, perhaps Jim Crow was not the true implementation of Confederate CEV; they would have preferred different things if they had been more informed about the world and more the people they wanted to be, and so on. But in that case Confederates should expect that their future selves will more faithfully execute the real CEV as they become more informed, unless they don’t trust themselves. (Suppose I like fairness but like myself being on top slightly more, and that after a certain point future-me is sufficiently unlike me that I’d prefer he not be on top, even though he will be, since he will have my preference set. In that case it’s rational for me to precommit to stepping down after a certain time.)
So upon reflection this is a valid concern to the extent that 1) we don’t trust ourselves to implement our current CEV better than our successors and 2) we don’t trust our precommitment mechanism to work either. Of course “our” CEVs probably vary enough that there’s not a single useful answer here.
Indeed, one recurring problem is that we are who we are and not who we want to be. It’s easier to get Our Hypothetical Racist Grandparents to agree that the premises of racism are wrong, but it will be harder to get them not to be upset at the thought of their granddaughter marrying one, even if they know they shouldn’t be and actually do want to change.
Maybe someday we’ll invent brainwashing that works?
General Comments about the Article
Many people have applied the same thought process. The transhumanists have just finished already.
The main reason why making the decision is easier for life extension than for AI is because the goodness of AI depends on exactly what its goal system does and how people value what the AI does, while living longer is valuable because people value it. AI is also a fairly technical subject with lots of opportunities for mistakes like anthropomorphism, while living longer is just living longer.
Sure there are some extra considerations. But imagine going to a society where people lived about 800 years and saying “hey, I have a great idea, why don’t we kill everyone off on their 100th birthday! Think of all these great effects it would have!” Those great effects are simply so much smaller than the value of life that the 800-year society might even lock you up someplace with soft walls.
This argument has been made a number of times. It takes the completely wrong view. The issue is foresight versus hindsight; the argument in the OP never says that only living to 100 is great and has amazing effects. Instead, the argument in the OP states that increasing life span could have devastating risks. Starting with a society where people already live to 800 years means starting with a society where those risks have already been mitigated.
In other words, increasing life span from 100 to 800 could cause say over-population problems which could lead to societal or environmental collapse. Therefore it should be approached with caution. If however, you already have a society where the life span is 800 years, and society is functioning, then those risks are negated, and of course there would be no reason to kill people.
If however, life span was raised to 800 years, and it did cause over-population problems and they did threaten societal or environmental collapse, then yeah, I might advocate killing off some people in order to save the whole of society.
A more understandable version of foresight v. hindsight is that modern people, with hindsight, know it would’ve probably been better not to be so harsh on the Germans in the Treaty of Versailles after WWI. However, because the Allied Powers did not have access to knowledge about the consequences of their actions, they could not apply that unknown knowledge to their decisions.
Not necessarily at all. Imagine a society that only changed the stuff that requires people dying 1⁄8 as fast as we did. Imagine they were facing much worse risk of overpopulation, because women could choose to remain fertile for more of their lives. Imagine that some people who wanted to die didn’t.
People would STILL refuse to start shooting centenarians. Adult education or drugs that enhance mental flexibility would be better than being shot. Vasectomies would be better than being shot. Allowing voluntary death is better than shooting everyone. Seriously, what kind of person would look at impending overpopulation and go “don’t worry about contraceptives—let’s just kill 7⁄8 of the humans on earth.”
Heck, we may be facing impending overpopulation right now, depending on what happens with the environment! Should we kill everybody partway through their reproductive years, to avoid it? Of course not! This sort of failure of imagination is a pretty recognizable part of how humans defend privileged hypotheses.
I was thinking about your post, and these parts don’t sound convincing enough to me. You could make a police-state society that stops every person, checks their birth date on a government-mandated ID, and just arrests/shoots anyone over 125 (or whatever the age is). Police states are not a GOOD thing by any means, and I am not recommending one. But the idea of “You won’t be able to control it” just seems like a very odd thing to announce for any kind of biological life-extension technology. How are we talking about an unstoppable opponent in the same manner people think of an AI?
And on the breakthrough side: even if we literally developed a pill to “cure all cancers for a dollar, side-effect free,” that would be a STUNNING breakthrough in today’s research. But we would need an even bigger breakthrough to get life-extension effects like what you’re describing, or more likely, several breakthroughs in separate fields. Are we really anywhere near that?
I suppose to summarize my current beliefs, it is possible that lifespan will go up exponentially at some point, through a biological method, but I don’t see that happening yet, and I definitely don’t see it being unstoppable, and there are other technological events that I would expect to hit a crisis point far sooner.
Is there evidence that I’m not aware of that would make me change my thoughts on this?
If you are not very old, you only have to increase your expected lifespan faster than time progresses; that is, push the rate of gain above one year per year and you are out of the woods, so to speak. At the moment, my life expectancy (based upon the population I belong to) increases by about 3 months per year; if that rate were to increase enough, I would have a shot at reaching longevity escape velocity.
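The escape-velocity arithmetic above can be sketched as a toy model. The ~3 months/year figure comes from the comment itself; the starting life expectancy and horizon are illustrative assumptions:

```python
# Toy model of "longevity escape velocity" (illustrative numbers only).
# Each calendar year you age 12 months, but medicine adds `gain` months
# of expected lifespan. You escape if remaining life expectancy never
# hits zero, which requires gains of at least 12 months per year.

def years_remaining(initial_remaining_years, gain_months_per_year, horizon=200):
    remaining = initial_remaining_years
    for year in range(horizon):
        remaining += gain_months_per_year / 12 - 1  # net change per year
        if remaining <= 0:
            return year + 1  # lifespan ran out after this many years
    return None  # survived the whole horizon: escape velocity

# At ~3 months/year of gain, a 40-year remaining expectancy lasts
# noticeably longer than 40 years, but still runs out.
print(years_remaining(40, 3))
# Above 12 months/year, remaining expectancy grows without bound.
print(years_remaining(40, 13))
```

So the 3-months-per-year trend stretches lives without abolishing death; only crossing the 12-months-per-year threshold changes the qualitative picture.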
You are quite right; according to SENS there are seven categories of “damage” that define aging:
From Wikipedia
If you look at every category independently the problem appears rather incremental, it’s not very hard for example to imagine that we will have livers made from scratch in the clinic in a decade or two.
The analogy of Life Extension to AGI seems weak to me. The problems of LE (if any should surface, which we don’t really know will happen) would be very slow to take place, and we would be able to watch them unfold and fix them as they happen. AGI is more like splitting the atom, in that it could easily reach critical mass and go beyond our capability to control in a matter of split seconds.
So no, the same level of urgent caution is not merited. I’m not saying there’s no way being careful can help us, but the mere act of conferring biological immortality through incremental advances in genetics is not something that should rationally make you worry in the same way that AGI should.
It’s really just a minor infrastructure change as far as humanity is concerned, one which makes reproduction less necessary, reduces the demand for (new) basic skills education, etc. -- it doesn’t reorganize the cosmos, muck around with our basic psychological make-up, or anything like that.
Heck I’d almost say being able to synthesize fresh meat and veggies from single cells rather than farming it is a comparatively bigger change for us to get used to… Eating is something we do every day, whereas aging is something we do once in our whole lifetime, at a slow and unnoticeable rate.
I’m rather glad you made this article, old sport, as it made me realize that I was treating this particular debate as an “arguments as soldiers” issue, wherein I must crush all opposing arguments and may never betray an ally. This was, of course, incorrect thinking, and so I’m glad I don’t think that way anymore.
But looking at all the comments so far, it doesn’t appear that there are any strong objections to life extension technology. There are the expected problems and the unexpected problems, but the same could easily be said of all sorts of tragedy removals that technological progress has brought us over this time. None of the ideas brought up in this post even attempt (so far as I can tell) to argue against at least doing research on life-extension, and, as others have brought up, the near-impossibility of existential risk from such technologies makes the AGI analogy extremely weak.
So, good thing to bring up. But for the most part, we’re done here.
Over-Population Discussion Thread
Space colonization isn’t really all that necessary, if we’re talking about human habitation. On the other hand, if we’re talking about space industrialization—constructing industrial/economic materials/objects in space—then that definitely opens the floor to sustaining significantly larger populations here on earth, with just a touch of additional materials-science “shenanigans”.
1. Orbital solar-thermal power plants (if constructed offworld with bootstrapped lunar industry) could push humanity to post-Kardashev-Type-I energy consumption. This removes energy cost as a primary economic concern.
2. The adoption of skyscraper farming techniques would eliminate vast swathes of human ecological impact (in addition to the offset of energy production offered by point #1). It would also significantly reduce current human land use by area. These two things in turn would allow for a greater human population without any apparent reduction in available square footage per person.
3. The adoption of higher-strength materials for construction (CNTs, for example) could permit the development of terrestrial ‘megastructures’. This, in turn, would allow for multi-level urban environments. Imagine a single building the size of the NYC metropolitan area that was half a mile high. Even if it were primarily open-air and dedicated 25% of its area to “parkland”, that would still represent as much as a ten-fold increase in habitable area, even using generous amounts of square footage per person. (I’m heavily ‘ballparking’ numbers here.)
-- In case it isn’t entirely obvious: the scenario I just described would allow the human population to increase to approximately 20x our current numbers, all while reducing the total landmass utilized and our ecological footprint. (Industry, energy production, and agriculture would all cease having observable ecological footprints.)
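The “ten-fold increase in habitation area” ballpark above can be sanity-checked with some quick arithmetic. Every input below (the 50 ft level spacing in particular) is my own illustrative assumption, not a figure from the comment:

```python
# Sanity check of the "ten-fold increase in habitation area" ballpark.
# All inputs are assumed, illustrative numbers.

FEET_PER_MILE = 5280

building_height_ft = 0.5 * FEET_PER_MILE  # "half a mile high"
level_spacing_ft = 50                     # generous open-air spacing (assumed)
parkland_fraction = 0.25                  # 25% of area dedicated to parkland

levels = building_height_ft / level_spacing_ft           # ~53 levels
habitable_multiplier = levels * (1 - parkland_fraction)  # vs. ground footprint

print(f"levels: {levels:.0f}")
print(f"habitable area vs. ground footprint: {habitable_multiplier:.0f}x")
```

Even with a very generous 50 feet between levels, the multiplier comes out near 40x, so the ten-fold figure reads as conservative rather than optimistic.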
While clinical immortality might decrease the total fertility rate (TFR) necessary for population replacement, it certainly wouldn’t reduce it to zero. (Accidents, plainly and simply, happen. As do intentional deaths.) The question then follows: what TFR would such a culture operate at?
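One way to frame that question: in a steady-state population where the only deaths are accidental, births must balance accidental deaths. A minimal sketch, where the annual accident hazard is an illustrative assumption rather than a sourced statistic:

```python
# Steady-state fertility under clinical immortality, where the only deaths
# are accidental. The hazard below is an illustrative ballpark, not a
# sourced figure.

accident_hazard = 5e-4                   # assumed annual P(accidental death)
expected_lifespan = 1 / accident_hazard  # years; ~2,000 under this assumption

# Steady state: crude birth rate must equal crude death rate.
# Women are ~half the population, so births per woman per year = 2 * hazard.
births_per_woman_per_year = 2 * accident_hazard

# Lifetime total fertility rate: yearly births per woman times expected years.
tfr = births_per_woman_per_year * expected_lifespan

print(f"expected lifespan: {expected_lifespan:.0f} years")
print(f"replacement TFR: {tfr:.1f} children per woman")
```

On these assumptions the replacement TFR comes out to 2 children per woman regardless of the accident rate, slightly below the ~2.1 usually quoted today: immortality spreads those births over millennia rather than lowering the lifetime total.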
John McCarthy argued that overpopulation won’t be much of a problem as part of his larger analysis on the sustainability of progress.
Human material progress is sustainable?!? Has anyone told Robin Hanson?
Human material progress is sustainable for very large values of material progress.
The page says: “there are no apparent obstacles even to billion year sustainability”. It seems rather strange to have a whole page devoted to the hypothesis that human material progress is sustainable—apparently without acknowledging that progress might result in the end of the human race—or that overall progress is very likely to eventually slow down to minimal levels. Instead we have a page on menaces—which offers an inadequate treatment of the topic.
I’ve never understood how severe food yield issues from overpopulation are supposed to come about. If the population is increasing far faster than we can increase the food yields, wouldn’t the price of food massively increase and stop people from being able to afford to have children? Is the idea that the worldwide agricultural system would be gradually overtaxed and then collapse within a short period? If not, what were all the people eating the day before catastrophic overpopulation is declared?
Things like that are not unprecedented. I think that is the theory for what became of the Easter Island civilization. One could also draw parallels to the collapse of sardine fishing in the US in the 1950s—within a couple of years the sardine population completely crashed, but up until that point the fishing had been going great, with no gradual cost increase that made it less profitable.
One thing that should probably be noted is that doubling life span wouldn’t necessarily double the number of years a woman is fertile.
That is true, but life-extending tech/transhumanism doesn’t tend to focus on making people be old for a longer period of time, but on lengthening our “peak” years.
This is actually one of the arguments that proponents of life-extending technologies use. Nay-sayers will ask: “Well, who wants to be 150 years old? You’d be sick and wrinkly and falling apart.” The general comeback is that the whole point of the tech is to keep you from being old until you’re, say, 130.
I sure as Hell wouldn’t expect life-extension through rejuvenation to actually restore eggs in females. They do run out eventually.
Whether or not Logos01 below is correct, the finite number of eggs women have is still vastly greater than the number of children most women have.
Hm, you are right. If I had remembered my basic biology, I would’ve remembered that women are already born with a finite number of eggs.
Weak rebuttal: Another reason people don’t have children in their 50s-70s is that chasing kids around is tiring. If women knew that they would at least still be young physically at those ages, they might think ahead and have some eggs frozen.
Isn’t that currently in doubt? I recall that in other mammals marrow stem cells have been shown to differentiate into follicles, but no studies have shown this to occur in humans yet.
Or have eggs generated through the biomedical practices of the age. We’re pretty close to differentiating stem cells into sperm already; eggs can’t be that far off.
“Life sucks” Discussion Thread
As utterly basic as this response is, it must be made:
as much as life sucks now, I cannot expect it to suck for the entirety of the next thousand years. Therefore, I will attempt to live those thousand years. If life still sucks, then I cannot expect that it will definitely suck for the next ten thousand years. Therefore, I will attempt to live those ten thousand years.
In other words: I’d like to be around when life stops sucking, thank you very much.
That is why I am still alive right now. The suck has to go away some time, right? Right?
We live in an uncaring universe, do the math.
But don’t kill yourself because that will make life worse for me and I can’t kill myself because I have plenty of people who refuse to kill themselves who care about me!
Relevant Onion article. (Part of the reason I’m still alive.)
The universe contains caring people, and various mechanisms they have created.
And what are our best rationally cleaned-up estimates for how long those caring people and their mechanisms are likely to stick around?
Well, sympathy (at least for ingroup members) is a human universal, so at least until we start with the brain modifications. And then unless we’re in a horrific dystopia we can remove a bunch of sources of suck.
Some mechanisms are historical accidents (say, the dole and suicide hotlines), but things like civilisations, economies, medical systems, and technological progress look unlikely to go away unless we all do.
That’s what I was aiming for with my previous comment. An actuarial table for our civilization given the best rationalist estimates we have is a depressing sight.
Then we don’t need to kill ourselves, that’s taken care of for us! (Note: previous sentence neglects the cost of suck spent waiting for the apocalypse.)
Basically. :)
Pascal’s wager with the future.
I’m not sure I understand the practical impact of this objection. Right now, massive life extension (routine 100+ yr lifespans) and solving serious permanent medical conditions (regrowing lost limbs, curing Down syndrome) are both significantly beyond our capacity. Any research into either topic easily qualifies as basic research and I would predict that increasing our knowledge towards curing permanent medical conditions would be useful in life extension, and vice versa. Put slightly differently, how would you expect our basic research funding to change if we abandoned research into massive life extension?
And even if Down Syndrome occurs in the same frequency after 1000 yr lifespans are common, isn’t that an improvement for most people?
Can’t see why you should mercy-kill people in a hundred years but shouldn’t mercy-kill them tomorrow.
Sometimes I think destroying the world sounds like a pretty good idea.
Please don’t destroy the world. I’m still using it.
But all my stuff is there.
A similar idea was discussed several months ago, in the post: On the unpopularity of cryonics: life sucks, but at least then you die.
Yes, that’s where I got the phrasing from. I chose that particular wording because I figured LW-ers would recognize it. Slightly different application here, though.
Okay, cool. I did not see an explicit link, and I figured that some may benefit from looking over the previous discussion.
Rebuttal of the rebuttal:
Suicide is already effectively illegal. Many easy, painless methods are outlawed or not given to the mentally ill (drugs, guns, assisted suicide). Suicide-attempters are force-treated. There is tremendous social pressure to not commit suicide, including inflicted guilt. (For detailed arguments, just read Sister Y’s linked blog.)
Personally, I’m not interested in life-extension until life is actually worth living. Several problems (like the harsh, maybe-even-negative-sum social hierarchy) seem unfixable without a major re-engineering of humanity, so I don’t expect it to happen anytime soon, if ever.
Having said that, I don’t know how bad life in general is. Maybe some people actually have lives worth living. They can extend their lives if they want. I’m not interested in arguing other people into pessimism, and I have no reference point to understand their preference anyway.
However, I think it’s credible that 30% or more don’t have worthwhile lives. Having more people alive for longer seems like it will only bring back the Malthusian era much faster.
Even if suicide is discouraged, no-one is likely to compel unhappy people to extend their lives. Most people think life extension is immoral; they won’t object to anyone turning it down.
Feeding tubes are life extension technology and we force those on people all the time. It ends up being really hard to enforce battery causes of action against forced medical care when you’d die without the intervention.
Assuming it is necessary, is there some particular reason we should think this major re-engineering of humanity is unfeasible?
Assuming away any externalities, why would “life sucks” (i.e., life is worse than average/ideal) be a reason to think life is worse than death?
Edit: I see now that my objection is the same as Normal Anomaly’s, at the end of the OP.
Because life with negative utility may be more common than we think (PDF warning). Longer lifespans could even make this more pervasive if Grognor’s reasoning upthread is common.
This is a problem we can fix with cryonics. Actually, hypothermic hibernation tech, or even a plain old anesthetic coma should suffice. There’s no reason anyone should be forced to stay awake in intense psychological pain while they await a cure.
(1) I can’t do PDFs, unfortunately, so could you explain what you mean here? (2) I think Grognor is talking about people’s expectations of future utility. If these are positive enough, then it makes sense to endure present hardship (because by enduring it, the agent creates larger benefits on net—just in the future, that’s all). That is, as long as the future will be bright enough, people shouldn’t choose to die now. Such lives therefore suck, but less than death, right? And if such better-than-death lives are common, then that hardly supports “life sucks worse than death.”
Unknown Consequences Discussion Thread
I’m going to bring up the classic Aubrey de Grey response, which actually works for all of these issues, but I think this one in particular. Yes, there will be problems if we live a much longer time. Yes, they will be very big problems. No, we don’t even know what those problems will even be. But those problems will pale in comparison to the needless deaths of hundreds of thousands of people every day.
Yup, there’s ginormous status quo bias going on here. If we lived in a world stretched to the limit for resources, where policy is impossible because everyone is a bigoted moral fossil and the few sane leaders left have no clue what they’re doing because it’s all so new, and you proposed “Hey, I know! Let’s kill everyone over 80!”… everyone would just stare and ask “Have you been reading Pebble in the Sky again?”.
How is this hand-waving?
It just gets to be very hard to envision a situation in which people would be motivated to keep forcing life onwards if it stopped being worth it, as a rule.
Yes, euthanasia exists, and is presently frequently denied. If the desire for it became so widespread that it was a major social need in the general population, that denial of it would also change, so long as we have anything to do with making our own rules.
And if most people were even modestly rational...
It seems to me that if you would be better off dead, that’s the kind of situation you notice.
If the consequences of life extension really are unknown, then they might be really good just as easily as they could be bad. Giving those up has a high opportunity cost. Without more information about the nature of these “unknown” consequences, they are not relevant (for non-risk-averse utility maximizers).