The best option is to embrace permanent death. The success of cryonics or other life-preserving technologies would be disastrous for humanity. Population estimates for the near future already cause panic about resources and sustainability, even without the further increase that a falling death rate would bring. It would only be feasible if the birth rate were cut severely by imposing policies such as China’s one-child policy. As procreation and the cultivation of family life are seen as integral parts of the human experience, these policies would probably be unsuccessful. And even if they were successful, would this really be desirable? We should accept the transient, cyclical nature of life on this planet.
That’s the “relinquish” option: the human race would not adjust sensibly to the new situation, so better not to go there at all. But the argument that humanity can’t restrain itself or coordinate its actions can be applied from the other side: the human race will never relinquish life extension, because too many people want to live, so we just have to make it work.
If humanity had the self-confidence and self-respect to seek immortality en masse, then we really would try to solve the other problems. The social contract would be, rejuvenation only if you have no more children. Outer space would be the safety valve. We’d fix a new average lifespan and organize to achieve equilibrium and sustainability around thousand-year lifespans rather than hundred-year lifespans. Something would be done.
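(To make that equilibrium concrete, here is a back-of-the-envelope sketch; the birth figure is a rough present-day number and everything else is assumption. In a steady state, population is approximately births per year times average lifespan, so tenfold lifespans require roughly a tenfold cut in the birth rate.)

```python
# Toy steady-state demography (illustrative round numbers only).
# In equilibrium, deaths balance births, so a stable population is
# approximately births_per_year * average_lifespan.

def steady_state_population(births_per_year: float, lifespan_years: float) -> float:
    """Approximate stable population if each cohort lives lifespan_years."""
    return births_per_year * lifespan_years

births = 130e6  # roughly today's global births per year
for lifespan in (100, 1_000):
    pop = steady_state_population(births, lifespan)
    print(f"lifespan {lifespan:>5} yr -> steady-state population ~{pop / 1e9:.0f} billion")

# With 1,000-year lifespans, holding population at the 100-year steady-state
# level would mean cutting births from ~130 million/year to ~13 million/year.
```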
But only a handful of people have that drive. So whether technological life extension comes to pass will not be decided by humanity as a whole deliberately tilting in one direction or another. I think it’s predestined that it will come to pass, because the desire to live is stronger than the willingness to die as a moral example. If you just look at the roles that those two impulses play in human psychology, there’s no comparison; the will to live is much, much stronger. So, though the human race doesn’t have enough will-to-live to be actively prioritizing longevity, it does have enough will-to-live not to outlaw it either. Only a traumatically overpopulated society would associate extended lifespan with suffering strongly enough to outlaw it.
The larger context here is that the human race is decoding the secrets of nature—the gene and the brain—and knowledge is power. Historically it’s a short step from science to technology. We are crossing those thresholds, from knowledge to power, just as surely as we are crossing all those environmental and population thresholds. And radical life extension, the ability to keep a human being in a state of youth and health for unnaturally long periods, is almost just an incidental consequence of this. You need to contemplate something like the full range of lifeforms on Earth, and then add science-fiction ideas about robots, smart machines, artificial life, and artificial intelligences, and then imagine that you could become any one of those lifeforms, while having all those machines working for you … to get an inkling of what that power should make possible.
That power will come into the possession of a minority. It may be a minority numbering in the millions, a globally distributed technology-using class which then becomes the de facto posthuman technocracy of Earth. And it would then be the cultural politics and other internal politics of that technocracy which decides what becomes of the rest of us. They might keep Earth as a nature preserve for old-style humans, policed to ensure that we don’t independently reinvent the technologies of power. They might create a social order which permits natural humans to join the techno-aristocracy, but only after they have shown a readiness to be responsible, and a psychological capacity to be responsible. Such a transition might even require neurological intervention, to ensure the presence in the new posthuman of whatever norms are deemed essential within the technocracy.
Depending on how concentrated power becomes, there is the potential for far more coercion. At one extreme, ultimate power is wielded by “unfriendly” artificial intelligences that care nothing for humanity and simply pave the earth with a new machine ecology. Or the summit of the technocracy might be a faction who use humanity for their own amusement, or a cult who have some highly specific posthuman ideal to which we will all be made to conform.
A humane transhuman order does seem to be a possibility, but there’s no getting around the fact that it would have to contain elites possessing powers even greater than the national-security elites of today’s superpowers, with their nuclear weapons and global surveillance networks. In particular, this implies a neurocratic elite for whom human nature is transparent and manipulable. The neurocrats might dominate, or they might just be one segment of the technocracy. But they would be extremely important, because their tools and knowledge would be responsible for perpetuating the values that give their civilization its stability and its defining tendencies.
So Laurie, I think that’s the big picture. The future likely contains some combination of artificial intelligence and transhuman neurotechnocracy that will regulate the bounds and the directions of posthuman civilization. How stable that will be, whether there will be competing power centers, and how the story of that civilization could unfold (because it too would have a life, career, and death; it wouldn’t be a timelessly stable order) are all highly contingent on what happens in our immediate future. At least you found your way to one of the places that might make a difference; LW could become a sort of finishing school for future AI programmers and neurocrats. I don’t think you’ll convince them to relinquish their future power; that would just mean that someone else seizes the throne. But maybe you can shape their thinking. Who knows, with time you might even join them.
I don’t know what the reproduction rate would look like if there were no aging. It’s very hard to tell what proportion of people like raising children, or whether people would want to keep raising children after the first few, or how much they’d want to keep raising children if they won’t need anyone to take care of them as they get old. Or for that matter, whether raising children would be so much easier with future tech that the prospect would be more attractive for more people.
I didn’t say that investigation into the possibilities of life lengthening should be prevented or inhibited. One of our most primal urges is to survive at any cost. Obviously human curiosity about what is possible will lead us into these areas of research. However, I think that if these methods proved viable to the extent you suggest (1,000-year life spans), this would ultimately be ruinous for the human race. I think you agree here, as the situation you describe as a likely outcome is by no means desirable:
That power will come into the possession of a minority. It may be a minority numbering in the millions, a globally distributed technology-using class which then becomes the de facto posthuman technocracy of Earth. And it would then be the cultural politics and other internal politics of that technocracy which decides what becomes of the rest of us. They might keep Earth as a nature preserve for old-style humans, policed to ensure that we don’t independently reinvent the technologies of power. They might create a social order which permits natural humans to join the techno-aristocracy, but only after they have shown a readiness to be responsible, and a psychological capacity to be responsible.
In the definition of the “relinquish” option it is argued that the prevention of certain technological investigations is characteristic of a totalitarian society. While I agree with that, I have to say that the society you describe sounds worse. So maybe, if what you describe really were a likely outcome, we should consider the value of restrictive measures now to prevent much bigger infringements of rights in the future. The prospects you describe as inevitable are deeply depressing: an elite group of humans enforcing the complete subjugation of the rest of the planet.
You also dismiss the problems of overpopulation, lack of resources, and colonisation of space, despite the fact that these issues are nowhere near being solved now. If it were possible to colonise space on a massive scale, this again I feel would be completely undesirable. Instead of looking for contingency plans, we should confront the problems of global warming and the lack of food and energy sources directly.
Essentially I think the desire for a prolonged life span is born out of fear of death, eternity and uncertainty. This is a natural human reaction to death, but in the end, even if we could push death further and further away, we would still have to face it at some point. Most people don’t need an extra 900 years of life. I don’t think we should allow our selfishness to corrupt the natural continuation of the human race.
I find that to be a stunningly selfish and elitist point of view. How do you know, and why are you deciding for them?

How? By “most people” I mean no one; I’m not suggesting there are some people who deserve extra life and some who don’t. Sorry if that isn’t clear; where I’m from, that would be implicitly understood.
My point was that you’re thinking that you know enough to decide that for everyone. You’re also bringing in a concept of need (need for what?) when I think the relevant question is whether people want to live longer.
And I’m including myself in that “no one” too, obviously! I don’t know whether I needed to make that explicit, but I just realised you said selfish as well as elitist.
To my mind, the selfishness was that you thought you could reasonably make that sort of decision for the whole human race.

OK, I don’t think I would use the word selfish in that context (arrogant or misguided, maybe), so I wasn’t sure what you meant. Anyway, a lot of people nearing the end of life say that they are happy to accept death; many very old people even wish for death. Obviously people have regrets, but I think you would have these regardless of life span. I’m not disputing the fact that many people do wish for immortality or a massively increased life span.
I brought up the question of need because of the possible negative impact that the invention of immortality could have on the human race and the rest of the world. If it is just a question of desire then the possible consequences have to be more closely scrutinised than if it actually served a purpose.
I think you agree here, as the situation you describe as a likely outcome is by no means desirable … an elite group of humans enforcing the complete subjugation of the rest of the planet.
It’s not subjugation if all they do is prevent independent outbreaks of destructive technology.
Looking back at what I wrote, I described four forms of posthuman technocracy, which we could call wilderness, meritocracy, exploiter, and cult. (Don’t hesitate to suggest better names if you think of them.) Wilderness and meritocracy are the ones you quoted, and I can’t view them as bad; in fact they could be positively utopian. We’re talking about a world which has solved the crisis created by technology, by arriving at a social and cultural form that can employ technology’s enormous potential constructively.
Such a world has to ensure that the crisis isn’t recreated by independent reinvention, and it has to regulate its own access to techno-power. The “policing” is for the first part, and “entry exams for the techno-aristocracy” is for the second part. Do you find the prospect of even that much structure depressing, or is it just that you think the dystopian possibilities would inevitably come true?
You also dismiss the problems of overpopulation, lack of resources, and colonisation of space … Instead of looking for contingency plans, we should confront the problems of global warming and the lack of food and energy sources directly.
We should distinguish between any exacerbation of existing problems due to longevity (your first list), and the problems that already exist and would continue to exist even without extra longevity (your second list). Just so we have names for things, I’ll call the second type of problem a “sustainability problem”; a technological solution to a sustainability problem, a “sustainability technology”; the first type of problem, an “exacerbation problem”; and a technology implicated in an exacerbation problem, a “destabilizing technology”. So we can call, for example, solar cells a sustainability technology, and stem cells a destabilizing technology. (These are precisely the sort of ultra-broad categories that can lead you astray, because under certain circumstances, solar cells might be destabilizing and stem cells might be sustainability-enhancing. But I felt the need for some extra vocabulary.)
So I summarize this part of your position as follows: we should prioritize the solution of sustainability problems, and avoid exacerbation problems by staying away from destabilizing technologies.
There’s a large number of sub-issues we could explore here, but I see the central fact as the failure of relinquishment as anything more than a local tactic that buys time. Someone somewhere will develop the destabilizing technologies. Independent development can’t be prevented or “policed” except by someone already possessing similar powers. So ultimately, someone somewhere must learn how to live with personally having access to destabilizing technologies, even if these powers are locked away most of the time and only brought out in a crisis. The various “posthuman technocracies” that I sketched are speculations about social forms that don’t keep advanced technology completely sealed away, but which nonetheless have some sort of resilience.
I don’t think we should allow our selfishness to corrupt the natural continuation of the human race.
Well, the moral and existential dimension of these discussions is hotly contested. I focused on politics and pragmatism above, but I’ll try to return to the other topics in the next round.
You didn’t say destructive technologies; you said “technologies of power”, implying that the technological and scientific domains in this hypothetical future world would be stifled or nonexistent in order to prevent humans grasping power for themselves and threatening the reign of this “posthuman technocracy”. That sounds like subjugation to me. You also say they would “decide what would become of the rest of us”. That also sounds like complete totalitarian domination of the “old-style human beings”. Considering that the ideal world for me would be an egalitarian society in which everyone is afforded the same indispensable rights and liberties, where the power balance is roughly equal from person to person, or where the leadership is transparent in its governing and is not afforded liberties others are denied, this does not sound like a utopian society.
I’m not backing the prevention of investigation into these “destabilising technologies”, as you have christened them. I think that the expansion of knowledge in every domain is desirable, though we should work out better ways of managing the consequences of such knowledge in some cases. But I am somewhat confused as to why you think that relinquishment now is unacceptable (not that I’m advocating it) while in the future you would welcome the idea of humans being policed to prevent the development of technologies that could threaten the existence of the elite posthumans, or any independent innovation. Isn’t that analogous to a hypothetical example of present-day humans policing or preventing technologies that could threaten their existence in the future? Yet you reject that strategy as relinquishment. If you think this is an unsustainable method now, then why do you think it will be a viable solution in the future? Do you think it would be more acceptable because it would be easier to enforce in a more strictly policed and less tolerant society? Do you object to “relinquishment” on an ideological basis or a pragmatic basis?
Also, I’m not sure we need the distinction between the two kinds of problems: global warming, lack of resources (food/energy) and overpopulation are all problems that would continue to exist without longevity and would be exacerbated by it. Colonisation of space is a separate issue; it isn’t a problem as such, except in that it is not currently possible, so some may find it problematic in that respect.
I didn’t just mean political power or power over people. Consider the “power” to fly to another country or to talk with someone on the other side of the world. Science and technology produce a lot of powers like that. The advance of knowledge to the point that you could rejuvenate a person or revive someone from cryonics implies an enormous leap in that sort of power.
Total relinquishment doesn’t work because it requires absolutely everyone else to be persuaded of your view. If just one country opts out and keeps doing R&D, then relinquishment fails. But a society where advanced technology does already exist has some chance of controlling where and how it gets reinvented. Such a society could become overtly hierarchical and have no notion of equal rights.
But even a society with deep egalitarian values would need to find a way to assimilate these powers, rather than just renounce them, to remain in control of its own destiny. Otherwise it risks waking up one day to find that a different set of values are in charge, or just that someone played with matches and burned down the house.
One irony of technological power is that it offers ways for egalitarian values to survive, even in deeply destabilizing circumstances. A theme of the modern world is the fear of homemade WMDs. As knowledge of chemistry, biology, nanotechnology, robotics… advances, it becomes increasingly easy for an isolated lab to cook up a doomsday device. One response to that would be to just strip 99% of humanity of the power and the right to make a technology lab. The posthuman technocrats live in orbit and monitor the Earth to see that no-one’s in a cave reinventing rocketry, and everyone else goes back to hunting and gathering. Luddism and transhumanism reach a compromise.
Hopefully you can see that a luddite world with a small transhuman elite is more stable than a purely luddite world. The hunter-gatherers can’t do much about it if someone restarts the industrial revolution. This is why total relinquishment is impractical.
But what about transhuman egalitarianism? Is that workable? If anyone can make a doomsday lab, won’t it all come crashing down? This is why there’s a “neuro” in “neuro-technocracy”. Advanced technology’s doomsday problem is mostly due to malice (I want to destroy) or carelessness (I’m playing with matches). Malice and carelessness are psychological states. So are benevolence and competent caution. Human society already has a long inventory of ways, old and new, in which it tries to instil benevolence and competence in its members, in which it watches for sociopathy and mental breakdown, and tries to deal with such problems when they arise.
In a society with truly profound neuroscientific knowledge, all of that will be amplified too. It should be possible to make yourself smarter or more moral by something akin to increasing the density of neural connections in the relevant parts of your brain. I don’t mean that neural connectedness is seriously the answer to everything, it’s just a concrete example for the purposes of discussion. The idea is that there are properties of your brain which have an impact on the traits that determine whether you can or can’t be trusted with the keys to advanced technology.
For a culture that has absorbed advanced technology, neuroscience and neurotechnology can become one of the ways that it protects and propagates itself, alongside more traditional methods like education and socialization. This is yet another futurist frontier where there’s an explosion of possibilities which we hardly have the concepts to discuss. We still need to work through the cartoon examples, like spacepeople and cavepeople, in order to get a feel for how it could function, before we try to be more realistic.
So let’s go back to the scenario where we have high technology in orbit and a new stone age on Earth. I set that up as an example of a nonegalitarian but stable scenario. Now let’s modify the details a bit. What if stoners and spacers have a shared culture and it’s a matter of choice where you reside and how you live? All a stoner has to do is go to the local communication monolith and say, I want to join the spacers. As a member of solar-system civilization, they will have access to all those technologies that are forbidden on Earth. So a condition of solar citizenship might be, a regular and ongoing neuropsychological examination and tune-up, to ensure that bad or stupid impulses aren’t developing. Down on Earth they can be a little more lax, but solar society is full of dangerously powerful technologies, and so it’s part of the social contract that everyone stays morally and intellectually calibrated, to a degree that is superhuman by present-day standards. And when someone migrates from the stone age to the space age, it’s a condition of entry that they adopt space-age standards of character and behavior, and the personal practices which protect those standards.
That’s the cartoon version of benevolent space-age neurotechnocracy. :-)
That is a rather unpopular view here. The common position is one along the lines of “People should be able to live for as long as they wish.” And the response to concerns of overpopulation is often “Let’s hurry up and get ready to colonize space.” Or “Computer emulations would be a lot cheaper.” Or something along those lines.
Basically, just because immortality would be problematic in some respects, it doesn’t mean that we have to consider the systematic ending of human life to be an acceptable state of affairs, and it doesn’t mean we shouldn’t look for solutions to the death and overpopulation problem as a whole.
Isn’t it good to have a multiplicity of viewpoints presented?
The common position is one along the lines of “People should be able to live for as long as they wish.” And the response to concerns of overpopulation is often “Let’s hurry up and get ready to colonize space.”
I find this point of view unsurprising, as it reflects the greed and selfishness instilled in the population of the overindulgent western world today. Everything should be for sale; we should be able to have whatever we want, even though the consequences for the rest of humanity, the other species on this planet and the environment could be devastating. Death isn’t a problem to be overcome; it’s the natural conclusion to life.
“I find this point of view unsurprising, as it reflects the greed and selfishness instilled in the population of the overindulgent western world today.”
“Selfish” is more typically attached to those people who are okay with other people dying, not to the people who aren’t.
Everything should be for sale; we should be able to have whatever we want, even though the consequences for the rest of humanity
I don’t know of any person here who wants immortality only for themselves and says to hell with everyone else. I suggest you read up on the actual views of the people in this community rather than strawmanning them and making caricatures out of them.
Death isn’t a problem to be overcome; it’s the natural conclusion to life.
You’re imbuing the word “natural” with a moral quality which isn’t actually there. By the same argument, earthquakes and tsunamis are also not problems to be overcome; they’re the natural result of tectonic movement.
Nor is starvation a problem to be overcome; it’s the natural conclusion to the lack of sufficient food. Nor is infant mortality a problem to be overcome—after all, women can make babies once every nine months, so it’s natural for so many infants to perish.
All problems to be overcome are “natural”, right up until they’re solved; at that point they’re no longer natural, and their solution is.
Selfish because if everyone on this planet chose to be immortal and continued to reproduce, life on earth would be unsustainable unless major innovations in relation to problems such as the ones mentioned previously were realised. Even if they were, the quality of life would inevitably be lower, and eventually the human race would die out if the birth rate exceeded the death rate by such massive numbers.
If reproduction were stopped it might be feasible, but this would probably have to be controlled and enforced, which I don’t agree with for the reasons stated earlier.
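(A rough illustration of the arithmetic behind this worry, with assumed round numbers rather than real projections: if the death rate fell to near zero while the birth rate stayed where it is, population would compound like interest.)

```python
# Rough compounding illustration (assumed round numbers, not a forecast).
# If medical immortality cut the death rate to ~0 while the birth rate
# stayed near today's ~19 per 1,000 per year, population would compound
# at ~1.9% per year, doubling roughly every 37 years.

pop = 7.0e9         # assumed starting population
birth_rate = 0.019  # crude birth rate per person per year
death_rate = 0.0    # the hypothetical: nobody dies

for year in range(0, 201, 50):
    print(f"year {year:>3}: ~{pop / 1e9:5.1f} billion")
    pop *= (1 + birth_rate - death_rate) ** 50  # advance 50 years

# Prints roughly: 7.0 -> 17.9 -> 46.0 -> 117.9 -> 302.0 billion over 200 years.
```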
I suggest you read up on the actual views of the people in this community rather than strawmanning them and making caricatures out of them.
I wasn’t actually addressing everyone in this community; I was replying to a comment from one individual.
You’re imbuing the word “natural” with a moral quality which isn’t actually there.
Maybe you’re right about that, although you’re stretching it a bit with the infant mortality argument. And I don’t think tsunamis are problems to be overcome; I think we can only deal with the consequences. Starvation in many places is a problem due to overpopulation, which would only be exacerbated if people stopped dying. And I do think that the quest for immortality is ultimately a selfish goal. When considering immortality, do you honestly think “it would be great if I could live forever” or “it would be great if me and everyone on the planet now could live forever”? Maybe you do think the latter, but I think if immortality was discovered tomorrow it would be concentrated in the hands of a rich elite who judge their lives to be more important than the rest of ours. After a while it would be sold to those wealthy enough to afford it. Just another way for the wealthy elitists to conserve their power. These people would control the world through the generations and decide their own world order. This would help to cause the stagnation of political ideas, social change and innovation.
Maybe you’re right about that, although you’re stretching it a bit with the infant mortality argument.
Really? It was the chief method of population control once upon a time, much like death by aging is now. They seem pretty analogous in most ways.
People were “selfish” back then for not wanting their infant babies to die. People are the same sort of selfish now for not wanting to see their parents die.
Lack of death in both cases causes the same sorts of problems, but people adjust to problems. Fertility declined after infant mortality dropped—fertility per year will likewise decline if people have their youthful years extended indefinitely.
Maybe you do think the latter, but I think if immortality was discovered tomorrow it would be concentrated in the hands of a rich elite who judge their lives to be more important than the rest of ours. After a while it would be sold to those wealthy enough to afford it.
Do you understand that you just made two contradictory arguments? Before, you said there would be overpopulation, because immortality would be given to all. Now you say it will be given only to some (so there’s no problem of overpopulation), but that these few will create an elite.
Those are two opposite problems—which one do you believe will be the actual case?
These people would control the world through the generations and decide their own world order.
How does that follow? In what way does medical immortality give these people greater powers of control than any current or medieval non-immortal dictator or monarchical dynasty?
Well, actually I think you misunderstood me. The statement you’re basing your argument on is “Death isn’t a problem to be overcome; it’s the natural conclusion to life.” I admit that I may have “imbued the word natural with moral weight”. However, you are responding as if I had said “death is natural, therefore it is desirable”, which I did not, and which would be a pretty meaningless statement to make. I merely used natural as an adjective; I could have used “inevitable” or “only” or many others instead. The adjective was obviously misplaced because it had unintended connotations in the context. I should have reread the comment more carefully.
In answer to your second point, it depends when immortality was discovered. In the last example I said if the means to be immortal were discovered tomorrow. Obviously it is more likely that it would be discovered in hundreds or thousands of years, when the world will probably be radically different from that of today. Therefore both scenarios are complete conjecture. Neither of us can know what the true impact on society would be. I think it will be negative for the reasons I’ve given; I’m not sure what your position is, as you’ve not clearly stated it, though I’m assuming you think the effects would be positive? It would be interesting to hear what your views are.
Because they would be able to maintain power for much longer. It is often when a dictator dies, or is aging and infirm, that their regime is contested.
We don’t know what the true impact on society will be if medical immortality is discovered—but the current impact of its lack is about 60 million deaths per year. A death toll on the scale of World War 2, every single year.
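(A quick sanity check of that figure from round public numbers; the crude death rate of roughly 8 per 1,000 per year is an approximation, not a precise statistic.)

```python
# Sanity check of the "~60 million deaths per year" figure, from round
# numbers: population ~7 billion, crude death rate ~8 per 1,000 per year.

world_population = 7.0e9
crude_death_rate = 8 / 1_000  # deaths per person per year (approximate)

deaths_per_year = world_population * crude_death_rate
print(f"~{deaths_per_year / 1e6:.0f} million deaths per year")  # ~56 million
```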
Can I condone such a death toll for reasons as uncertain as the fear of the possible formation of an immortal super-elite which will lead the rest of humanity to misery, or the fear of overpopulation bringing misery, or of dictators lasting a bit longer in power than they otherwise would?
No, I can’t condone it. Yes, I’m sure lots of problems will arise if medical immortality is discovered. But as a rough calculation none of those problems is nearly certain enough to justify 60 million deaths per year in return. Your own calculations of this may be different.
--
As a sidenote your concept of just the elite becoming immortal isn’t an automatic dystopia either. If anything, I think that might make them a bit more responsible in evaluating the long-term consequences of their policies.
If a Victorian thinker had challenged the appropriateness of medical advances like open-heart surgery on the grounds that the Earth was dangerously close to carrying capacity, would you be persuaded that medical researchers were selfish or misguided?
Putting it slightly differently, there’s not yet a compelling case that the current average human lifespan or carrying capacity of the planet are set in stone by physical laws. Human science has increased both those numbers many times throughout history.
If you want to make the argument that a counterfactual world with 3 billion humans, all living at the American standard of living, would be more moral than the current setup, that’s a respectable position. But there’s substantial difficulty getting from here to there. If we’re both wishing for pie in the sky, doesn’t it seem more pleasant to wish for a world of 6 billion living sustainably at an American standard of living, and to try to think of how to get to that outcome?
No, because life-saving procedures are a different matter from procedures that ensure immortality, which would effectively cut the death rate in a hypothetical situation where everyone in the world had access to them. My point is that I don’t think this would be sustainable and that it would lead to dire consequences for the human race. As I mentioned to Mitchell Porter, I didn’t say that experimentation in this area should be prevented; I just think that it is not a desirable road for humanity in the event of success.
1) I’m doubtful that the distinction between lifesaving procedures and immortality will end up being clear. I’m optimistic that humanity will eventually have the capability to do things like replace lost limbs with new ones. Once we have that level of capacity, most of the modern causes of death go away—if you survive to reach the hospital, you’re likely to be able to leave basically good as new.
2) Given our current levels of technology, Western-level standards of living for everyone are not sustainable. Nor do there appear to be imminent technological advances that would make that sustainable. But radical life-extension technology (of whatever form) is also nowhere near imminent. Why do you think that the kind of technology advances you dislike are closer to achievement than carrying-capacity advances?
Some of your comments suggest that you would oppose carrying-capacity increases (like colonizing other planets) even if they were within humanity’s capacity, because these technological capacities would be bad for humanity. Assuming you are correct that these technological revolutions would fundamentally change human society, why are these hypothetical changes worse than the changes caused by developments like the selective breeding of livestock, practical steam engines, cell phones, the Internet, or agriculture itself?
Wait, I don’t think I said I “dislike” any technological advances. I’m not opposing the investigation of life-preserving technology, and I would be greatly impressed if a “cure for death” were discovered; I just think that the effect on human life would ultimately be negative. I said the idea of colonising space would be depressing to me. By colonising space I don’t mean living on other planets, as right now that is an impossibility; I mean living in spaceships. This would in no way be comparable to living on planet Earth, and the psychological implications of remaining in an enclosed space for such a long period of time would be great. Even if these and other practical limitations could be overcome, I find this idea disagreeable on an emotional/aesthetic level. I find the idea of leaving the beauty of the natural world in favour of a simulated reality within a spacecraft deeply sad, particularly if this was a result of the irreparable destruction of planet Earth rendering it uninhabitable for humans.
Regarding the question of technological revolutions, there have been many that have had an extremely negative impact on human society and the world itself, the most obvious being the utilisation of fossil fuels as energy sources. The examples you mention are pretty benign, but in the case of agriculture: http://www.guardian.co.uk/global-development/2012/aug/26/food-shortages-world-vegetarianism?INTCMP=SRCH
Also: “Meat production accounts for about 5% of global CO2 emissions, 40% of methane emissions and 40% of various nitrogen oxides. If meat production doubles, by the late 2040s cows, pigs, sheep and chickens will be responsible for about half as much climate change impact as all the world’s cars, trucks and aircraft.” (Guardian) Human beings have engineered many impressive innovations in technology and science; unfortunately, these have also had some terrible side effects that we need to overcome as soon as possible.
I find this idea disagreeable on an emotional/aesthetic level.
Sure—but this sort of reaction is historically contingent; our culture could have developed such that you would feel differently. These sorts of judgments are very fluid over time: what the Victorians found aesthetic was different from what the Romans found aesthetic, which is different again from what we find aesthetic. This fluidity makes it very hard to tell when such judgments should be taken seriously. Whereas we know that almost all technological advances have reduced poverty.
Even as a believer in AGW, I’m pretty confident that the Industrial Revolution (which started with coal and moved to oil) was a net benefit to human happiness. Separately, it wouldn’t surprise me at all if there were a near-term rise in the incidence of vegetarianism in the West for food-shortage reasons. (Food is a zero-sum game: there’s a finite amount of energy per unit time that Earth receives from the Sun. Every calorie spent digesting grass to build cow bone is a calorie that can’t sustain a human.)
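(A crude worked version of that parenthetical; the ~10% trophic-efficiency figure is the standard ecology rule of thumb, and the calorie counts are arbitrary round numbers.)

```python
# Crude trophic-efficiency arithmetic (textbook ~10% rule, round numbers).
# Feeding plants to cattle and eating the beef loses ~90% of the calories.

feed_kcal = 100_000        # plant calories grown as animal feed
trophic_efficiency = 0.10  # ~10% of feed energy becomes animal biomass

beef_kcal = feed_kcal * trophic_efficiency
person_days_direct = feed_kcal / 2_000    # eating the plants directly, at 2,000 kcal/day
person_days_via_beef = beef_kcal / 2_000  # eating the beef instead

print(f"{feed_kcal:,} plant kcal -> ~{beef_kcal:,.0f} beef kcal")
print(f"fed directly: ~{person_days_direct:.0f} person-days; "
      f"via beef: ~{person_days_via_beef:.0f} person-days")
```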
The best option is to embrace permanent death. The success of cryogenics or other life preserving technologies would be disastrous for humanity. Already population estimates for the near future cause panic about resources and sustainability, without further population increase due to a decrease in death rate. It would only be feasible if the birth rate was cut severely by imposing policies such as that in China. As procreation and cultivation of family life are seen as integral parts of the human experience these policies would probably be unsuccessful. Even if they were successful would this really be desirable? We should accept the transient, cyclical nature of life on this planet.
That’s the “relinquish” option. The human race would not adjust sensibly to the new situation, so better to just not go there. But the argument that humanity can’t restrain itself or coordinate its actions can be applied from the other side: The human race will never relinquish life extension, too many people want to live, so we just have to make it work.
If humanity had the self-confidence and self-respect to seek immortality en masse, then we really would try to solve the other problems. The social contract would be, rejuvenation only if you have no more children. Outer space would be the safety valve. We’d fix a new average lifespan and organize to achieve equilibrium and sustainability around thousand-year lifespans rather than hundred-year lifespans. Something would be done.
But only a handful of people have that drive. So whether technological life extension comes to pass, will not be decided by humanity as a whole deliberately tilting in one direction or another. I think it’s predestined that it will come to pass, because the desire to live is stronger than the willingness to die as a moral example. If you just look at the roles that those two impulses play in human psychology, there’s no comparison, the will to live is much much stronger. So, though the human race doesn’t have enough will-to-live to be actively prioritizing longevity, it does have enough will-to-live to not outlaw it either. Only a traumatically overpopulated society would associate extended lifespan with suffering strongly enough to outlaw it.
The larger context here is that the human race is decoding the secrets of nature—the gene and the brain—and knowledge is power. Historically it’s a short step from science to technology. We are crossing those thresholds, from knowledge to power, just as surely as we are crossing all those environmental and population thresholds. And radical life extension, the ability to keep a human being in a state of youth and health for unnaturally long periods, is almost just an incidental consequence of this. You need to contemplate something like the full range of lifeforms on Earth, and then add science-fiction ideas about robots, smart machines, artificial life, and artificial intelligences, and then imagine that you could become any one of those lifeforms, while having all those machines working for you … to get an inkling of what that power should make possible.
That power will come into the possession of a minority. It may be a minority numbering in the millions, a globally distributed technology-using class which then becomes the de-facto posthuman technocracy of Earth. And it would then be the cultural politics and other internal politics of that technocracy which decides what becomes of the rest of us. They might keep Earth as a nature preserve for old-style humans, policed to ensure that we don’t independently reinvent the technologies of power. They might create a social order which permits natural humans to join the techno-aristocracy, but only after they have shown a readiness to be responsible, and a psychological capacity to be responsible. Such a transition might even require neurological intervention, to ensure the presence in the new posthuman of whatever norms are deemed essential within the technocracy.
Depending on how concentrated power becomes, there is the potential for far more coercion. At one extreme, ultimate power is wielded by “unfriendly” artificial intelligences that care nothing for humanity and who just pave the earth with a new machine ecology. Or the summit of the technocracy might be a faction who use humanity for their own amusement, or a cult who have some highly specific posthuman ideal to which we will all be made to conform.
A humane transhuman order does seem to be a possibility, but there’s no getting around the fact that it would have to contain elites possessing powers even greater than the national-security elites of today’s superpowers, with their nuclear weapons and global surveillance networks. In particular, a neurocratic elite for whom human nature is transparent and manipulable is implied. The neurocrats might dominate, or they might just be one segment of the technocracy. But they would be extremely important, because their tools and knowledge would be responsible for perpetuating the values that give their civilization its stability and its defining tendencies.
So Laurie, I think that’s the big picture. The future likely contains some combination of artificial intelligence and transhuman neurotechnocracy, that will regulate the bounds and the directions of posthuman civilization. How stable that will be, whether there will be competing power centers, how the story of that civilization could unfold (because it too would have a life, career, and death, it wouldn’t be a timelessly stable order), are all highly contingent on what happens in our immediate future. At least you found your way to one of the places that might make a difference; LW could become a sort of finishing school for future AI programmers and neurocrats. I don’t think you’ll convince them to relinquish their future power, that just means that someone else will seize the throne. But maybe you can shape their thinking. Who knows, with time you might even join them.
I don’t know what the reproduction rate would look like if there were no aging. It’s very hard to tell what proportion of people like raising children, or whether people would want to keep raising children after the first few, or how much they’d want to keep raising children if they won’t need anyone to take care of them as they get old. Or for that matter, whether raising children would be so much easier with future tech that the prospect would be more attractive for more people.
I didn’t say that investigation into the possibilities of life lengthening should be prevented or inhibited. One of our most primal urges is to survive at any cost. Obviously human curiosity for what is possible will lead us into these areas of research. However I think that if these methods proved viable to the extent you suggest (1000 year life spans) this would ultimately be ruinous for the human race. I think you agree here as the situation you describe as a likely outcome is by no means desirable:
In the definition of the “relinquish” option it argues that the prevention of certain technological investigations is characteristic of a totalitarian society. While I agree with that I have to say that the society you describe sounds worse. So maybe if what you describe was really a likely outcome we should think about the value of restrictive measures now for preventing much bigger infringements of rights in the future. The prospects you describe as inevitable are deeply depressing: an elite group of humans enforcing the complete subjugation of the rest of the planet.
You also dismiss the problem of over population, lack of resources, and colonisation of space despite the fact that these issues are no where near being solved now. If it was possible to colonise space on a massive scale this again I feel would be completely undesirable. Instead of looking for contingency plans we should confront the problems of global warming lack of food and energy sources directly.
Essentially I think the desire for a prolonged life span is born out of fear of death, eternity and uncertainty. This is a natural human reaction to death but in the end even if we could push death further and further away we would still have to face it at some point. Most people don’t need an extra 900 years of life. I don’t think we should allow our selfishness to corrupt the natural continuation of the human race.
I find that to be a stunningly selfish and elitist point of view. How do you know, and why are you deciding for them?
How?By “most people” I’m implying no one I’m not suggesting there are some people who deserve extra life and some who don’t. Sorry if that isn’t clear where I’m from that would be implicitly understood.
My point was that you’re thinking that you know enough to decide that for everyone. You’re also bringing in a concept of need (need for what?) when I think the relevant question is whether people want to live longer.
And I’m including myself in that no one too obviously! I don’t know whether I need to make that explicit but I just realised you said selfish as well as elitist.
To my mind, the selfishness was that you thought you could reasonably make that sort of decision for the whole human race.
Ok I don’t think I would use the word selfish in that context- arrogant or misguided maybe- so I wasn’t sure what you meant. Anyway a lot of people when they are nearing the end of life say that they are happy to accept death, many very old people even wish for death. Obviously people have regrets but I think you would have these regardless of life span. I’m not disputing the fact that many people do wish for immortality or massively increased life span.
I brought up the question of need because of the possible negative impact that the invention of immortality could have on the human race and the rest of the world. If it is just a question of desire then the possible consequences have to be more closely scrutinised than if it actually served a purpose.
Oops just replied to myself but meant to reply to you.
It’s not subjugation if all they do is prevent independent outbreaks of destructive technology.
I can see that I described four forms of posthuman technocracy, that we could call wilderness, meritocracy, exploiter, and cult. (Don’t hesitate to employ better names if you think of them.) Wilderness and meritocracy are the ones that you quoted and I can’t view them as bad, in fact they could be positively utopian. We’re talking about a world which has solved the crisis created by technology, by arriving at a social and cultural form that can employ technology’s enormous potential constructively.
Such a world has to ensure that the crisis isn’t recreated by independent reinvention, and it has to regulate its own access to techno-power. The “policing” is for the first part, and “entry exams for the techno-aristocracy” is for the second part. Do you find the prospect of even that much structure depressing, or is it just that you think the dystopian possibilities would inevitably come true?
We should distinguish between any exacerbation of existing problems due to longevity (your first list), and the problems that already exist and would continue to exist even without extra longevity (your second list). Just so we have names for things, I’ll call the second type of problem a “sustainability problem”; a technological solution to a sustainability problem, a “sustainability technology”; the first type of problem, an “exacerbation problem”; and a technology implicated in an exacerbation problem, a “destabilizing technology”. So we can call, for example, solar cells a sustainability technology, and stem cells a destabilizing technology. (These are precisely the sort of ultra-broad categories that can lead you astray, because under certain circumstances, solar cells might be destabilizing and stem cells might be sustainability-enhancing. But I felt the need for some extra vocabulary.)
So I summarize this part of your position as follows: we should prioritize the solution of sustainability problems, and avoid exacerbation problems by staying away from destabilizing technologies.
There’s a large number of sub-issues we could explore here, but I see the central fact as the failure of relinquishment as anything more than a local tactic that buys time. Someone somewhere will develop the destabilizing technologies. Independent development can’t be prevented or “policed” except by someone already possessing similar powers. So ultimately, someone somewhere must learn how to live with personally having access to destabilizing technologies, even if these powers are locked away most of the time and only brought out in a crisis. The various “posthuman technocracies” that I sketched are speculations about social forms that don’t keep advanced technology completely sealed away, but which nonetheless have some sort of resilience.
Well, the moral and existential dimension of these discussions is hotly contested. I focused on politics and pragmatism above, but I’ll try to return to the other topics in the next round.
You didn’t say destructive technologies you said “technologies of power” implying that the technological and scientific domains in this hypothetical future world would be stifled or non existent in order to prevent humans grasping power for themselves and threatening the reign of this “post human technocracy”. This sounds like subjugation to me. You also say they would “decide what would become of the rest of us”. This also sounds like complete totalitarian domination of the “old style human beings”. Considering that the ideal world for me would be an egalitarian society in which everyone is afforded the same indispensable rights and liberties, where the power balance is roughly equal from person to person or where the leadership is transparent in its governing and is not afforded liberties others are not this does not sound like a utopian society.
I’m not backing the prevention of investigation into these “destabilising technologies” as you have christened them. I think that the expansion of knowledge in every domain is desirable though we should work out better ways of managing the consequences of such knowledge in some cases. But I am kind of confused as to why you think that relinquishment now is unacceptable (not that i’m advocating that it is acceptable) but that in the future you would welcome the idea that humans would be policed in order to prevent the development of technologies that could threaten the existence of the elite post humans or any independent innovation. Isn’t that analogous to a hypothetical example of present day humans policing or preventing technologies that could threaten their existence in the future?However you reject this strategy as relinquishment. If you think this is an unsustainable method now then why do you think it will be a viable solution in the future? Do you think that it would be more acceptable because it would be easier to enforce in a more strictly policed and less tolerant society. Do you object to “relinquishment” on an ideological basis or a pragmatic basis?
Also I’m not sure we need the distinction between the two kinds of problems: global warming, lack of resources (food/ energy) and over population are all problems that would continue to exist without longevity and would be exacerbated by longevity. Colonisation of space is a separate issue, it isn’t a problem as such except in that it is not currently possible. So some may find it problematic in this respect.
I didn’t just mean political power or power over people. Consider the “power” to fly to another country or to talk with someone on the other side of the world. Science and technology produce a lot of powers like that. The advance of knowledge to the point that you could rejuvenate a person or revive someone from cryonics implies an enormous leap in that sort of power.
Total relinquishment doesn’t work because it requires absolutely everyone else to be persuaded of your view. If just one country opts out and keeps doing R&D, then relinquishment fails. But a society where advanced technology does already exist has some chance of controlling where and how it gets reinvented. Such a society could become overtly hierarchical and have no notion of equal rights.
But even a society with deep egalitarian values would need to find a way to assimilate these powers, rather than just renounce them, to remain in control of its own destiny. Otherwise it risks waking up one day to find that a different set of values are in charge, or just that someone played with matches and burned down the house.
One irony of technological power is that it offers ways for egalitarian values to survive, even in deeply destabilizing circumstances. A theme of the modern world is the fear of homemade WMDs. As knowledge of chemistry, biology, nanotechnology, robotics… advances, it becomes increasingly easy for an isolated lab to cook up a doomsday device. One response to that would be to just strip 99% of humanity of the power and the right to make a technology lab. The posthuman technocrats live in orbit and monitor the Earth to see that no-one’s in a cave reinventing rocketry, and everyone else goes back to hunting and gathering. Luddism and transhumanism reach a compromise.
Hopefully you can see that a luddite world with a small transhuman elite is more stable than a purely luddite world. The hunter-gatherers can’t do much about it if someone restarts the industrial revolution. This is why total reliquishment is impractical.
But what about transhuman egalitarianism? Is that workable? If anyone can make a doomsday lab, won’t it all come crashing down? This is why there’s a “neuro” in “neuro-technocracy”. Advanced technology’s doomsday problem is mostly due to malice (I want to destroy) or carelessness (I’m playing with matches). Malice and carelessness are psychological states. So are benevolence and competent caution. Human society already has a long inventory of ways, old and new, in which it tries to instil benevolence and competence in its members, in which it watches for sociopathy and mental breakdown, and tries to deal with such problems when they arise.
In a society with truly profound neuroscientific knowledge, all of that will be amplified too. It should be possible to make yourself smarter or more moral by something akin to increasing the density of neural connections in the relevant parts of your brain. I don’t mean that neural connectedness is seriously the answer to everything, it’s just a concrete example for the purposes of discussion. The idea is that there are properties of your brain which have an impact on the traits that determine whether you can or can’t be trusted with the keys to advanced technology.
For a culture that has absorbed advanced technology, neuroscience and neurotechnology can become one of the ways that it protects and propagates itself, alongside more traditional methods like education and socialization. This is yet another futurist frontier where there’s an explosion of possibilities which we hardly have the concepts to discuss. We still need to work through the cartoon examples, like spacepeople and cavepeople, in order to get a feel for how it could function, before we try to be more realistic.
So let’s go back to the scenario where we have high technology in orbit and a new stone age on Earth. I set that up as an example of a nonegalitarian but stable scenario. Now let’s modify the details a bit. What if stoners and spacers have a shared culture and it’s a matter of choice where you reside and how you live? All a stoner has to do is go to the local communication monolith and say, I want to join the spacers. As a member of solar-system civilization, they will have access to all those technologies that are forbidden on Earth. So a condition of solar citizenship might be, a regular and ongoing neuropsychological examination and tune-up, to ensure that bad or stupid impulses aren’t developing. Down on Earth they can be a little more lax, but solar society is full of dangerously powerful technologies, and so it’s part of the social contract that everyone stays morally and intellectually calibrated, to a degree that is superhuman by present-day standards. And when someone migrates from the stone age to the space age, it’s a condition of entry that they adopt space-age standards of character and behavior, and the personal practices which protect those standards.
That’s the cartoon version of benevolent space-age neurotechnocracy. :-)
That is a rather unpopular view here. The common position is one along the lines of “People should be able to live for as long as they wish.” And the response to concerns of overpopulation is often “Let’s hurry up and get ready to colonize space.” Or “Computer emulations would be a lot cheaper.” Or something along those lines.
Basically, just because immortality would be problematic in some respects, it doesn’t mean that we have to consider the systematic ending of human life to be an acceptable state of affairs, and it doesn’t mean we shouldn’t look for solutions to the death and overpopulation problem as a whole.
Isn’t it good to have a multiplicity of viewpoints presented?
I find this point of view unsurprising, as it reflects the greed and selfishness instilled in the population of the over-indulgent western world today. Everything should be for sale; we should be able to have whatever we want, even though the consequences for the rest of humanity, the other species on this planet and the environment could be devastating. Death isn’t a problem to be overcome; it’s the natural conclusion to life.
“Selfish” is more typically attached to people who are okay with other people dying, not to the people who are not okay with it.
I don’t know of any person here who wants immortality only for themselves and says to hell with everyone else. I suggest you actually read up on the views of the people in this community rather than strawmanning them and making caricatures out of them.
You’re imbuing the word “natural” with a moral quality which isn’t actually there. By the same argument earthquakes and tsunamis are also not problems to be overcome; they’re the natural result of tectonic movement.
Nor is starvation a problem to be overcome, it’s the natural conclusion to the lack of sufficient food. Nor is infant mortality a problem to be overcome—after all women can make babies once every nine months, so it’s natural for so many infants to perish.
All problems to be overcome are “natural”, right up until they’re solved; at that point they’re no longer natural, but their solution is.
Selfish because if everyone on this planet chose to be immortal and continued to reproduce, life on earth would be unsustainable unless major innovations in relation to problems such as the ones mentioned previously were realised. Even if they were, the quality of life would inevitably be lower, and eventually the human race would die out if the birth rate exceeded the death rate by such massive numbers. If reproduction were stopped it might be feasible, but this would probably have to be controlled and enforced, which I don’t agree with for the reasons stated earlier.
I wasn’t actually addressing everyone in this community; I was replying to a comment from one individual.
Maybe you’re right about that, although you’re stretching it a bit with the infant mortality argument. And I don’t think tsunamis are problems to be overcome; I think we can only deal with the consequences. Starvation in many places is a problem due to overpopulation, which would only be exacerbated if people stopped dying. And I do think the quest for immortality is ultimately a selfish goal. When considering immortality, do you honestly think “it would be great if I could live forever” or “it would be great if me and everyone on the planet now could live forever”? Maybe you do think the latter, but I think if immortality was discovered tomorrow it would be concentrated in the hands of a rich elite who judge their lives to be more important than the rest of ours. After a while it would be sold to those wealthy enough to afford it. Just another way for the wealthy elitists to conserve their power. These people would control the world through the generations and decide their own world order. This would help to cause the stagnation of political ideas, social change and innovation.
Really? It was the chief method of population control once upon a time, much like death by aging is now. They seem pretty analogous in most ways.
People were “selfish” back then to not want their infant babies to die. People are the same sort of selfish now to not want to see their parents die.
Lack of death in both cases causes the same sorts of problems, but people adjust to problems. Fertility declined after infant mortality dropped—fertility per year will also decline if people have their youthful years extended indefinitely.
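To put illustrative numbers on that adjustment (a back-of-the-envelope sketch; the population and lifespan figures are assumptions, not from the thread): in a stationary population, size is roughly births per year times average lifespan, so a tenfold-longer lifespan implies roughly tenfold fewer births per year at constant population.

```python
# Back-of-the-envelope demographics (illustrative numbers, not from the thread).
# In a stationary population, size ~= births per year * average lifespan,
# so a 10x longer lifespan implies ~10x fewer births per year at constant size.

def births_needed(target_population, avg_lifespan_years):
    """Births per year that hold a stationary population at the target size."""
    return target_population / avg_lifespan_years

for lifespan in (80, 800):
    b = births_needed(7e9, lifespan)  # ~7 billion, roughly today's population
    print(f"lifespan {lifespan:>3} years -> ~{b / 1e6:.0f} million births/year")
```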
Do you understand that you just made two contradictory arguments—before, you said there will be overpopulation, because immortality will be given to all. Now you say it will be given only to some (so there’s no problem of overpopulation), but these few will create an elite.
Those are two opposite problems—which one do you believe will be the actual case?
How does that follow? In what way does medical immortality give these people greater powers of control than any current or medieval non-immortal dictator or monarchical dynasty?
Well, actually, I think you misunderstood me. The statement you’re basing your argument on is “Death isn’t a problem to be overcome; it’s the natural conclusion to life.” I admit that I may have imbued the word “natural” with moral weight. However, you are responding as if I had said “death is natural, therefore it is desirable”, which I did not, and which would be a pretty meaningless statement to make. I merely used natural as an adjective; I could have used “inevitable” or “only” or many others instead. The adjective was obviously misplaced because it had unintended connotations in the context. I should have reread the comment more carefully.
In answer to your second point, it depends on when immortality is discovered. In the last example I said if the means to be immortal was discovered tomorrow. Obviously it is more likely that it would be discovered in hundreds or thousands of years, when the world will probably be radically different to that of today. Therefore both are complete conjecture. Neither of us can know what the true impact on society would be. I think it will be negative for the reasons I’ve given; I’m not sure what your position is, as you’ve not clearly stated it, though I’m assuming you think the effects would be positive? It would be interesting to hear what your views are.
Because they are able to maintain power for much longer. It is often when a dictator dies or is aging and infirm that their regimes are contested.
We don’t know what the true impact on society will be if medical immortality is discovered—but the thing is that the current impact of its lack is about 60 million deaths per year. A death toll on the scale of World War 2, every single year.
Can I condone such a death toll for reasons as uncertain as the fear of the possible formation of an immortal super-elite which will lead the rest of humanity to misery, or the fear of overpopulation bringing misery, or of dictators lasting a bit longer in power than they otherwise would?
No, I can’t condone it. Yes, I’m sure lots of problems will arise if medical immortality is discovered. But as a rough calculation none of those problems is nearly certain enough to justify 60 million deaths per year in return. Your own calculations of this may be different.
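For what it’s worth, the order of magnitude of that figure checks out. A minimal sanity-check sketch, assuming commonly cited values for the crude death rate and the WW2 toll:

```python
# Rough sanity check on the "60 million deaths per year" figure.
# Assumed inputs: world population ~7 billion, crude death rate ~8.3 per
# 1000 per year; WW2's total toll is usually estimated at 60-80 million.

world_population = 7.0e9
crude_death_rate = 8.3 / 1000     # deaths per person per year

deaths_per_year = world_population * crude_death_rate
print(f"~{deaths_per_year / 1e6:.0f} million deaths per year")  # ~58 million

ww2_toll = 60e6
print(f"roughly {deaths_per_year / ww2_toll:.1f}x a WW2-scale toll, every year")
```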
--
As a side note, your concept of just the elite becoming immortal isn’t an automatic dystopia either. If anything, I think that might make them a bit more responsible in evaluating the long-term consequences of their policies.
To ask the natural follow-up question:
If a Victorian thinker had challenged the appropriateness of medical advances like open heart surgery on the grounds that the Earth was dangerously close to carrying capacity, would you be persuaded that medical researchers were selfish or misguided?
Putting it slightly differently, there’s not yet a compelling case that the current average human lifespan or carrying capacity of the planet are set in stone by physical laws. Human science has increased both those numbers many times throughout history.
If you want to make the argument that a counterfactual world with 3 billion humans, all living at the American standard of living, would be more moral than the current setup, that’s a respectable position. But there’s substantial difficulty getting from here to there. If we’re both wishing for pie-in-the-sky, doesn’t it seem more pleasant to wish for a world of 6 billion people sustainably living at an American standard, and to think about how to get to that outcome?
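One way to see the difficulty of getting from here to there: a crude ecological-footprint calculation (a sketch; the per-person footprint and biocapacity figures are rough assumptions, not from the thread) suggests that, with current technology, Earth supports far fewer people at an American standard than either counterfactual assumes.

```python
# Crude carrying-capacity sketch from ecological-footprint figures.
# Both numbers are assumed approximations: ~8 global hectares (gha) per
# person at a US standard of living, ~12 billion gha of biocapacity on Earth.

us_footprint_gha_per_person = 8.0
earth_biocapacity_gha = 12e9

supportable = earth_biocapacity_gha / us_footprint_gha_per_person
print(f"~{supportable / 1e9:.1f} billion people at a US standard of living")
```

On those assumptions, both the 3-billion and the 6-billion worlds require large carrying-capacity advances, which is exactly the point at issue.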
No, because life-saving procedures are a different matter from procedures that ensure immortality, which would effectively cut the death rate in a hypothetical situation where everyone in the world had access to them. My point is I don’t think this would be sustainable; it would lead to dire consequences for the human race. As I mentioned to Mitchell Porter, I didn’t say that experimentation in this area should be prevented; I just think that it is not a desirable road for humanity in the event of success.
Two points:
1) I’m doubtful that the distinction between lifesaving procedures and immortality will end up being clear. I’m optimistic that humanity will eventually have the capability to do things like replace lost limbs with new limbs. Once we have that level of capacity, most of the modern causes of death go away—if you survive to reach the hospital, you’re likely to be able to leave basically good as new.
2) Given our current levels of technology, Western-level standards of living for everyone are not sustainable. Nor do there appear to be imminent technological advances that would make them sustainable. But radical life-extension technology (of whatever form) is also nowhere near imminent. Why do you think that the kinds of technological advances you dislike are closer to achievement than carrying-capacity advances?
Some of your comments suggest that you would oppose carrying-capacity increases (like colonizing other planets) even if they were within humanity’s capacity, because these technological capacities would be bad for humanity. Assuming you are correct that these technological revolutions would fundamentally change human society, why are these hypothetical changes worse than the changes caused by developments like selective breeding of livestock, practical steam engines, cell phones, the Internet, or agriculture itself?
I’m not sure that regrowing limbs is much like rejuvenation. Most people die of aging, not accidents.
Wait, I don’t think I said I “dislike” any technological advances. I’m not opposing investigation into life-preserving technology, and I would be greatly impressed if a “cure for death” was discovered; I just think that the effect on human life would ultimately be negative. I said the idea of colonising space would be depressing to me. By colonising space I don’t mean living on other planets, as right now that is an impossibility; I mean living in spaceships. This would in no way be comparable to living on planet earth, and the psychological implications of remaining in an enclosed space for such a long period of time would be great. Even if these and other practical limitations could be overcome, I find the idea disagreeable on an emotional/aesthetic level. I find the idea of leaving the beauty of the natural world in favour of a simulated reality within a spacecraft deeply sad, particularly if this was the result of the irreparable destruction of planet earth rendering it uninhabitable for humans.
Regarding the technological revolutions you mention, there have been many that have had an extremely negative impact on human society and the world itself, the most obvious being the utilisation of fossil fuels as energy sources. The examples you mention are pretty benign, but in the case of agriculture: http://www.guardian.co.uk/global-development/2012/aug/26/food-shortages-world-vegetarianism?INTCMP=SRCH Also: “Meat production accounts for about 5% of global CO2 emissions, 40% of methane emissions and 40% of various nitrogen oxides. If meat production doubles, by the late 2040s cows, pigs, sheep and chickens will be responsible for about half as much climate change impact as all the world’s cars, trucks and aircraft.” (Guardian)
Human beings have engineered many impressive innovations in technology and science; unfortunately, these have also had some terrible side effects that we need to overcome as soon as possible.
Sure—but this sort of reaction is historically contingent; our culture could have developed such that you would feel differently. These sorts of judgments are very fluid over time—what the Victorians found aesthetic was different from what the Romans found aesthetic, which is different again from what we find aesthetic. This fluidity makes it very hard to tell when such judgments should be taken seriously. Whereas we know that almost all technological advances have reduced poverty.
Even as a believer in AGW, I’m pretty confident that the Industrial Revolution (which started with coal and moved to oil) was a net benefit to human happiness. Separately, it wouldn’t surprise me at all if there were a near-term rise in the incidence of vegetarianism in the West for food-shortage reasons. (Food is a zero-sum game: there’s a finite amount of energy per unit time that Earth receives from the Sun. Every calorie spent digesting grass to build cow bone is a calorie that can’t sustain a human.)
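That parenthetical can be made concrete with the rough “ten percent law” from ecology (a sketch; the 10% trophic efficiency is a textbook approximation, not a figure from the comment):

```python
# Illustration of the feed-to-food energy loss, using the textbook "ten
# percent law" from ecology; the 10% figure is an assumed approximation,
# and real efficiency varies by animal and feed.

feed_kcal = 1000.0
trophic_efficiency = 0.10    # share of feed energy that ends up as meat

meat_kcal = feed_kcal * trophic_efficiency
print(f"{feed_kcal:.0f} kcal of plant feed -> ~{meat_kcal:.0f} kcal of meat")
print(f"eating the plants directly yields ~{feed_kcal / meat_kcal:.0f}x the food energy")
```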
I would rather be immortal and limited to two children for all of my immortal lifespan than die.