Do not miss the cutoff for immortality! There is a probability that you will live forever as an immortal superintelligent being, and you can increase your odds by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity, similar to “The Fable of the Dragon-Tyrant.”
Do you think the technological singularity (an AI intelligence explosion) will occur in our lifetimes, and that we will one day be immortal, hyper-intelligent, god-like beings? Or do you think we will miss the cutoff for immortality? Imagine 14 billion years of the universe existing, of complex systems of molecules getting exponentially more and more complex, all leading to this moment, and then missing immortality by 200 years, or 20 years, or even 1 day!
This is my first-ever post on LessWrong. The purpose of this post is to seek the reader’s help in raising awareness about a situation that I consider to be the most important challenge facing humanity. However, obviously, for the reader to be on board I must first convince them of my beliefs. Therefore, in the following paragraphs, I’ll argue for my thesis. If you agree with the ideas I present, then hopefully you will be motivated to help get these ideas into the mainstream.
Note:
In this post, I will be assuming that the reader already believes that the technological singularity could reasonably occur in our lifetimes. If the reader does not share this view, then realize that this is the abridged version of this post. I’ve also written a longer post on this topic that goes into more detail and is geared toward a more general audience, where I present more evidence for the claims I assert. For instance, I devote a large section in the longer post to convincing people who either don’t know about the technological singularity or who are skeptics that it could indeed happen in our lifetimes. To see the full version of this longer post, click the following link: https://www.reddit.com/user/Oliver—Klozoff/comments/14iemf6/dont_miss_the_cutoff_for_immortality_theres_a/?utm_source=share&utm_medium=web2x&context=3. Finally, if the reader objects to any part of the argument I’ve presented here, then I encourage them to read the complete post, as I may address their concern therein. For example, in the linked post I also devote a large section to showing how AI research and development has a significant risk of apocalyptic outcomes or even human extinction, which is merely assumed to be true in the context of the present post.
Thesis Statement:
There is a significant probability that you will live forever as an immortal superintelligent being, and you can increase your odds of this occurring by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity.
I submit that we can become immortal godlike beings by creating superintelligent AI and asking it to make us immortal, and then asking it to make us superintelligent ourselves. This should be the number one priority and concern on everyone’s mind.
All this seems far-fetched, but remember: all we as humans need to do is create an AI that can create an AI smarter than itself, and an intelligence explosion will occur. We don’t need to invent superintelligent AI ourselves, just an AI that is about as smart as we are, and not in every domain, merely in the domain of advancing AI. An upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence. This event is called the technological singularity.
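To make this feedback loop concrete, here is a minimal toy model in Python (purely illustrative, with arbitrary made-up numbers and assumptions of my own, not a forecast of any real system) of how capability can compound once each generation of AI can build a somewhat smarter successor in less time than the last:

```python
# Toy model of a "runaway reaction" of self-improvement cycles.
# Illustrative only: the constants below are arbitrary assumptions, not predictions.

def intelligence_explosion(initial_capability=1.0, improvement_factor=1.5,
                           generations=15):
    """Show how capability compounds if each generation designs a smarter successor,
    and smarter systems need less time to do so."""
    capability = initial_capability      # 1.0 = roughly human-level at AI research
    elapsed_years = 0.0
    for gen in range(1, generations + 1):
        time_to_next = 2.0 / capability  # assumed: design time shrinks with capability
        elapsed_years += time_to_next
        capability *= improvement_factor # assumed: each generation is 50% more capable
        print(f"gen {gen:2d}: capability = {capability:10.1f}x  "
              f"elapsed = {elapsed_years:6.2f} years")

if __name__ == "__main__":
    intelligence_explosion()
```

Even in this crude sketch, the gap between generations shrinks from a couple of years at the start to a couple of days by the end, which is the qualitative point of the “explosion.”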
Keep in mind that all humans who die before this event will miss the cutoff for immortality. We can limit the number of needless deaths before the cutoff by convincing the mainstream of this project/goal. Before I was a computer scientist, I was studying to be a molecular biologist, and it’s my opinion that the complex interactions between all the tiny molecules that make up the molecular machine that is your body are far too complicated for humans alone to figure out to the degree needed to extend human life any time soon. Evolution is random and unorganized; the body is the most complex and worst-organized machine you can think of (the extremely convoluted mechanisms of micro-RNAs encoded in seemingly unrelated parts of the genome, for example). The only way to potentially achieve immortality in our lifetimes is through an AI intelligence explosion (the technological singularity) that creates a superintelligent being that we can ask to please make us immortal. All humans who are alive at the time of the intelligence explosion could, by essentially begging this godly being to help us, achieve immortality through the sheer problem-solving might of a being inconceivably further along the spectrum of intelligence than we are. An almost undefinably hard problem like human immortality may be trivial to such a being. Proportionally, we would be as dumb in comparison to such a being as ants are in comparison to humans. The problems an ant faces are trivial to us: moving leaves, fighting termites. Imagine trying to even explain our problems to an ant. Imagine trying to teach an ant calculus. We are godlike beings compared to ants and can solve problems they can’t even comprehend. The moment we admit that information processing is the source of intelligence, that some appropriate computational system is the basis of intelligence, that we will improve these systems continuously, and that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god.
Humans in the past had no chance of defeating death; they were born too soon in history to stand a chance. Yet they still clung to hope. There were quests for the Holy Grail (a cup with powers that provides eternal youth or sustenance in infinite abundance). Gunpowder was discovered by alchemists searching for an elixir of immortality. For many centuries, immortality was the most sought-after goal in alchemy, pursued through the philosopher’s stone, also called the elixir of life, useful for rejuvenation and for achieving immortality. Isaac Newton is believed to have poisoned himself with mercury in his alchemical pursuit of the philosopher’s stone. Imagine what the majority of humans who ever existed would give to trade places with someone today and have this opportunity to make the cutoff for immortality. One should be doing everything in their power to not miss the cutoff for immortality! The human race is 200,000 years old. Most humans in the past had no chance. A human born 60,000 years ago had no chance. My grandfather was born in 1918; he had no chance. My dad is old enough that he will probably not make it. But you have a chance! Those you love have a chance too! In the grand scheme of things, the universe is still very young. The entropy heat death of the universe is speculated to happen hundreds of trillions of years in the future. Even if we can’t find a way to escape entropy, hundreds of trillions of years is still a lot to miss out on. As pondered in the opening of this post, how tragic would it be for the universe to build up to the singularity over the course of 14 billion years, only for you to then miss immortality by 200 years, or 20 years, or even 1 day, and therefore miss out on an adventure that could possibly take place trillions of years into the future? A hyperintelligent being given hundreds of trillions of years may even be able to escape the entropy heat death of the universe by drilling into other dimensions (or through other sci-fi means); so one might even be missing out on true immortality by missing the cutoff. Imagine a world in which eight billion people awoke to realize they or those they loved might die before death is defeated, and eight billion people decided to do something about it. Our goal should be to limit the number of people who needlessly die before the cutoff. Such a goal seems like a worthy cause to unite all of humanity. This is one of the ideas I believe we need to get into the mainstream.
What percentage of humanity’s energy, intellectual work, and resources is being directly dedicated to this goal now? Almost no direct effort is being put toward this project. We are just progressing toward it naturally. How many man-hours are being wasted on inconsequential things like TikTok and video games? How much brainpower is being squandered on goals that won’t matter in a post-singularity world anyway? For example, climate change is a problem that could be solved almost instantaneously through the technological singularity: a superintelligent being could merely release a swarm of self-replicating nanobots that convert carbon dioxide to oxygen. Of course, I understand that AI research and development has a significant risk of apocalyptic outcomes or even human extinction. Conversely, then, if the singularity goes poorly, either civilization will collapse and stop producing high levels of greenhouse gases anyway, or, even worse, the planet will be so altered by cataclysmic events that any previous climate change becomes insignificant. Therefore, in either case, climate change will be irrelevant in the near future. Yet most humans think of climate change as the most pressing problem facing humanity; a problem that will affect humans thousands of years into the future. Instead of raising the cost of energy over climate-change-based concerns, we should be using all energy available to us to get the initial conditions right for a successful transition into the post-singularity future. Climate change is only one of many examples of society caring about the wrong things. Time, energy, and effort should stop being wasted on other inconsequential pursuits.
As we get closer to the technological singularity, I have no doubt that at some point the majority of humans will eventually be convinced of its importance. My argument will only become more compelling as the years go by and we keep improving our intelligent machines. However, my concern is that the longer it takes humanity to be convinced, the smaller the benefit of being convinced at all will be. For example, if humanity is only convinced to reach its full collaborative potential a year before the singularity would have happened anyway, then there isn’t much marginal benefit that can be reaped. We may come to regret all the potential progress that was squandered. Future humans will then look back on pre-singularity society and think how stupid and shortsighted we were. They will regret how long it took all of humanity to make this collective project/goal its top priority and thereby limit the number of needless deaths before the cutoff for immortality. It took a while to convince humanity of climate change as well. Hopefully, climate change will serve as a case study of the benefits of starting early. If taking no action means it would ordinarily take 200 years to reach the singularity under the current societal conditions of near-complete ignorance of this goal, then perhaps humanity could cut that number down to 50 years if society can be convinced to correct the misuse of its resources and achieve its unrealized full rate of progress in this area. Then the singularity would occur in our lifetimes when it otherwise wouldn’t. It is worth considering that technological progress can be drastically accelerated when the world is properly motivated, especially during life-or-death situations in which societies are intensely unified behind a common purpose. Consider the technological jump in weaponry and technology from 1940 to 1945 during World War Two: jet engines, aircraft carriers, assault rifles, ballistic missiles, the first rockets to reach space, radar, sonar, microwave ovens, atomic bombs, nuclear energy, the mass production of the first antibiotic (penicillin), and the first electronic digital computers. This rate of improvement in wartime technology was possible because society was collectively motivated to stop wasting time on unimportant things and focus on the singular goal of winning the war.
It’s this collective purpose and fighting spirit that I hope humanity will one day have for the project. It’s important to realize that the possibility of achieving immortality and godhood through the singularity is only half of the argument for why humanity should take the next few decades very seriously. The other half of the argument is that humanity needs to work together to try to avoid apocalyptic outcomes like killer rogue AI, nuclear holocaust, or societal collapse in the years leading up to or during the technological singularity. In this way, the war metaphor from the previous paragraph is surprisingly appropriate to describe our situation. The consequences of losing are certainly no less real than those of a real war: we are fighting for our lives. Overall, I am actually quite pessimistic about the possible outcomes of the technological singularity. That is why I am dedicating my life to making sure this whole process goes well. Of course, some people might incorrectly conclude that the risks associated with AI research weaken my overall argument for a project to develop superintelligent AI, because they pollute an otherwise optimistic expected outcome with gloomier alternatives, thereby making the whole endeavor less worth caring about. I hold the position that the possible civilization-ending outcomes from AI do not invalidate my appeal to make the project of achieving the singularity a global priority. Instead, the minefield of possible negative outcomes provides even more reason for humanity to take this seriously. After all, the higher the chance of AI destroying humanity, the lower the chance of us becoming immortal superintelligent gods. If we do nothing, then we will continue to stumble into all these upcoming challenges unprepared and unready. Some people might say that it is better to slow down AI research, even to the point where the singularity takes thousands of years to achieve, so that humanity can progress extremely safely in a highly controlled manner and maximize the probability of a successful transition into the singularity while minimizing AI extinction risks. There are several problems with this. Firstly, from the standpoint of a human alive today, it is preferable to take one’s chances with an attempt at reaching the singularity during one’s own lifetime, even if it means that humanity is less prepared than it possibly could have been. The alternative is knowingly delaying the singularity so far into the future that it becomes certain that one will die of old age. Secondly, it is unwise to slow down AI progress too much because the pre-singularity state of humanity that we currently live in is mildly precarious in its own right because of nuclear weapons. The more time one waits before making an attempt on the singularity, the greater the chance that nuclear war will occur at some point and ruin all of our technological progress at the last minute. Thirdly, the companies and governments creating AI are likely to perceive themselves as being in a race against all others, since to win this race is to win the world, provided you don’t destroy it in the next moment. It follows that there is a lot of incentive for entities that are less morally scrupulous and less safety-conscious to ignore AI research moratoriums designed to slow down the pace of progress.
When you’re talking about creating AI that can make changes to itself and become superintelligent, it seems that we only have one chance to get the initial conditions right. It would be better not to inadvertently cede the technological advantage to irresponsible rogue entities, as such entities should not be trusted with creating the conditions to initiate the singularity safely. Moreover, in order to make sure that nobody performs unauthorized AI research, there would need to be a highly centralized world government that keeps track of all computers that could be used to create AI. With the current political state of the world, even if the West managed to restrict unauthorized AI research, it would be infeasible to control external entities in China or Russia. If we move too slowly and try to limit AI research in the West, then there is a higher probability that China will overtake us in AI development, and humanity may have to entrust them to navigate us into the singularity safely. Personally, if we are headed in that direction anyway, then I would rather the West drive than be in the passenger seat for the journey. So this event is approaching us whether we want it to or not. We have no idea how long it will take us to create the conditions in which the singularity can occur safely, and our response to that shouldn’t be less research; it should be more research! I believe our best option is to attack this challenge head-on and put maximum effort into succeeding. The technological singularity seems to be either the path toward heaven or hell. I can’t really see how a middling outcome is possible. So there is everything to gain and everything to lose. We will only get one chance to make sure the transition into the singularity goes smoothly. This is why we need to all work together and try our best.
A common objection to immortality is that “death is what gives life meaning” and that it would be presumptuous or vain to want to cheat death, since it is all we have ever known and it is human nature to die. Those supporting the pro-aging argument in this way object to my ideas by saying things like “the shortness of human life is a blessing.” Nick Bostrom, a professor of philosophy at Oxford, director of the Future of Humanity Institute, and someone who shares my view on superintelligent AI being the key to immortality, found it striking that those who defend this common objection often commit fallacies which, from experience, would not be expected of them in a different context. In response, Nick Bostrom wrote a short story called “The Fable of the Dragon-Tyrant” that shows how strange the statement “death is what gives life meaning” is. For context, if you have not already done so, you should watch this ten-minute-long animated video presenting “The Fable of the Dragon-Tyrant”: https://youtu.be/cZYNADOHhVY. If you prefer, you can also read the original story here: https://nickbostrom.com/fable/dragon.
The story is set in a reality in which humans lived forever naturally, but thousands of years ago (when all humans were hunter-gatherers) a dragon appeared and demanded that a portion of the world’s people be randomly selected and brought to him every day to be eaten, to keep his hunger satisfied. This dragon seemed utterly invincible to any technology possessed by hunter-gatherers. Some humans tried to fight back, yet the dragon’s invincibility was only confirmed again and again. Scientists studied the scales the dragon shed, but every test they conducted only further proved the invulnerability of his armor. Eventually, attempts at defeating the dragon stopped, and it was accepted as a fact of life. Institutions sprung up around the dragon: trains were built to make the delivery of people to the dragon as painless as possible, governments employed people to keep track of whom the dragon ate, and religions arose claiming to know what happens after people are eaten, claiming that people go to a better place. The dragon was seen as a necessary and even beautiful part of life. Almost unnoticeably, however, over the millennia, the smartest humans made incremental progress in technology, which compounded upon itself as ideas led to new ideas, until they found themselves with technology that would have seemed like magic to people born mere generations ago. Slowly a renewed interest in defying the dragon emerged. A worldwide meeting was held where it was realized that it might be possible to produce a dragon-killing weapon in a handful of decades if they all worked together, though nothing could be guaranteed. The next morning, a billion people woke to realize they or those they loved might be sent to the dragon before the weapon was completed. Whereas before, active support for the anti-dragon cause had been limited, it now became the number one priority and concern on everyone’s mind. Thus started a great technological race against time. They got to work building the weapon, all the while daily trains of randomly selected people were still being transported to the dragon. Where once the trains had been seen as part of life, now they incensed the people of the world and motivated them to work harder on their project. Time, energy, and effort stopped being wasted on other inconsequential pursuits, and the whole of humanity stood behind the goal of killing the dragon as soon as possible to save as many people as possible. Thirty years passed, and the humans had seemingly succeeded; the weapon was mere hours away from being completed. Unfortunately, the father of one of the weapon’s designers was one of the unlucky people selected to board the last train to be sent to the dragon to be eaten. His son had worked for days without sleep, hoping that the weapon would be completed a day earlier and his father would be saved, but it was too late. He begged for the train to be stopped, but it had been agreed that the trains would run until the last minute so as not to arouse the dragon’s suspicion, giving the weapon the best chance of working. His father was eaten, and a few moments later the weapon struck and killed the dragon. The reign of terror was over, but the cost had been enormous. As the son cried for his father, he thought to himself, “Had we started but one day earlier, my father would not have died. We should have begun this project years earlier.
So many need not have been killed by the dragon, had we but awoken from our acceptance of his horror sooner.” The good news, however, was that the rest of humanity had made it through and was now free of the villainous tyrant that was the dragon. They were free to exist without fear and to learn from their mistakes so they could grow wise; they were free to start living.
Even though that may be a fantastical story about saving the world from a dragon, the situation that humanity finds itself in with regard to death doesn’t differ in any meaningful way from the situation described in the story. Would those who argue the pro-aging position against life-prolonging technologies in our world also argue that the pro-dragon stance is the best position for humans to take in this hypothetical story? It seems to me that those who are pro-dragon in such a scenario would be suffering from Stockholm syndrome; they are hostages sympathizing with their captor because it is all humanity has ever known. If the dragon killed a family member of theirs, would they still call it a blessing? In such a scenario I would obviously argue that humans should fight back against the dragon. The pro-aging argument that it is in our nature to die seems just as confused as a pro-dragon argument that it is in our nature to be eaten by the dragon. Death is not inevitable through some mystic force; you only die because your genome wants you to. Your genome could keep you alive if it wanted to. For example, the jellyfish Turritopsis dohrnii has biological immortality; if your genome “wanted” to, humans could be set up in such a way that they do not age. However, it’s not in your genome’s interest to keep you alive forever, because the genome doesn’t want the oldest generation to monopolize the resources. Instead, your genome wants the next generation to have an opportunity to allow their mutations to be tested so that the genome can continuously improve. That’s what is best for your genome. You are a conscious being, not a genome.
I find another barrier to convincing the mainstream of this project is that a lot of people don’t understand what the possible rewards actually entail if humanity succeeds in this goal. This misunderstanding leads to a fairly common objection to these ideas: “How boring would it be to live forever; that sounds horrible. It sounds like a curse.” To those who think this, firstly, it is important to note that one can still die if they want to in this immortality scenario, so if one really wants to, they can revert themselves back to being a human after a few hundred years and die naturally. However, since this is such an important decision, wouldn’t it be wise to take a couple of thousand years or so to think it over? Perhaps more importantly, it must be understood that a hyper-intelligent AI might be able to completely understand the machine of molecules that makes up our consciousness, such that we could transfer our consciousness to a more malleable state that can be improved upon exponentially as well, so that we could also become hyperintelligent gods. If one were going to make the decision not to be immortal and choose a natural death, then it would, at the very least, be wise to postpone that decision until one is hyperintelligent and inconceivably smarter than one is now. Of course, some people doubt that human consciousness could be transferred in such a way. I agree that if you were to merely scan your mind and build a copy of your consciousness on a computer, that consciousness obviously wouldn’t be you. However, I still think it might be possible to transfer your consciousness into a more easily upgradable substrate as long as you do it in a way that maintains the original system of information that is that consciousness, instead of creating a copy of that system. Perhaps this could be done by slowly replacing one’s neurons, one by one, with nanobots that do the exact same things that biological neurons do (detect the chemical signals released by adjacent neurons and fire signals of their own if the combined input is above a certain threshold, make new connections, etc.). Would you notice if one neuron was replaced? Probably not. What if you kept replacing them one by one until every neuron was a nanobot? As long as the machine of information that is your consciousness is never interrupted, I believe one would survive that transition. I think preserving the mechanism of consciousness is what’s important, not what the mechanism is made out of. Then once your mind is made from nanobots, you can upgrade it to superintelligent levels, and you could switch substrates to something even better using a similar process. If it is possible for a digital system to be conscious, then one could transfer their mind into that digital substrate in a similar way. In this way, mind uploading could be survivable. But to be honest, humans shouldn’t even be thinking that far ahead. The most important thing is immortality; once you safely make the cutoff, you can relax for a few hundred years and think about these things if you want, but the immortality cutoff is the only thing humans should be thinking about. And if a hyperintelligent AI is not able to solve the problem of transferring human consciousness, then it’s probably not possible anyway; let the AI worry about that. However, I have a feeling that it will not be a challenge for such a being. An almost undefinably hard problem like consciousness may be trivial to such a being.
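To make the gradual-replacement idea slightly more concrete, here is a toy sketch (purely illustrative: the “neurons” are hypothetical threshold units of my own invention, and nothing here bears on real neuroscience or on whether consciousness would actually survive) showing that swapping in components which compute the same input-output function, one at a time, never changes the network’s observable behavior:

```python
# Toy illustration of the gradual-replacement thought experiment.
# A "biological" unit and a "nanobot" unit compute the same threshold function,
# so replacing them one at a time never changes the network's input-output behavior.
# (Purely illustrative; this says nothing about whether consciousness would survive.)

def biological_neuron(inputs, weights, threshold=1.0):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def nanobot_neuron(inputs, weights, threshold=1.0):
    # Different "substrate", identical input-output function.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def run_network(neurons, inputs, weights):
    return [neuron(inputs, weights) for neuron in neurons]

inputs, weights = [1, 0, 1], [0.6, 0.9, 0.5]
network = [biological_neuron] * 5           # start fully "biological"

baseline = run_network(network, inputs, weights)
for i in range(len(network)):
    network[i] = nanobot_neuron             # replace one unit at a time
    assert run_network(network, inputs, weights) == baseline

print("outputs identical after every single replacement:", baseline)
```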
Proportionally, we would be as dumb in comparison to such a being as ants are in comparison to humans. The problems an ant faces are trivial to us: moving leaves, fighting termites. Imagine trying to even explain our problems to an ant. Imagine trying to teach an ant calculus. Worrying about uploading human consciousness properly now is like an ant worrying about how to solve a calculus problem: leave that to a smarter being. Consider an ant’s consciousness compared to your consciousness right now. An ant’s consciousness (if it is even conscious at all) is very dim. The best thing that an ant can ever experience is that it might detect sugar as an input and feel a rudimentary form of excitement. An ant cannot even comprehend what it is missing out on. Imagine explaining to an ant the experience of being on psychedelic drugs while sitting on a beach and kissing the woman you love, or the experience of graduating from college with your friends. In the future, humans could be able to experience conscious states that they can’t even comprehend now. What needs to be understood is that immortality is not going to be life as you know it now, merely stretched out forever: millions or trillions of years of humans just stumbling around the earth, putting up with work, feeling depressed, being bored, watching TV. The human condition was evolutionarily designed so that dopamine and serotonin can make us feel depressed or lazy or happy during certain times. That’s what life is as a human: trying to be happy while merely existing; that’s why Buddhism was created. Even if a human could somehow live their entire life feeling the best possible ecstasy that a human can experience, it would be nothing compared to what a godlike being could experience. Those who say “I don’t want to be hyperintelligent or live forever; I’d rather just die a human” are like ants deciding “I don’t want to experience being a human anyway, so I might as well just die in a few weeks as an ant.” An ant isn’t even capable of understanding that decision. If one can, one should at least wait until one is no longer an ant before making such important decisions. I would imagine that upon becoming human, they would think to themselves how lucky they are that they chose to become human, and they would reflect on how close they came to making the wrong decision as an ant and essentially dying from stupidity.
It’s hard to exaggerate how much everything is about to change. Speculative sci-fi is as good as any prediction from me about what the far future will be like, as such predictions are beyond human reasoning. In the future, perhaps your brain could be a neutron star the size of a solar system, and instead of using chemical interactions between molecules in the way a human brain operates, the system it is built on could be based on the strong nuclear force so as to pack as much computational power as possible into the smallest space. Or your neurons could be made from the stuff that makes up the stuff that makes up quarks instead of being made from cells. You could split your consciousness off into a trillion others, simulate a trillion realities, and then combine your consciousnesses again. Instead of communicating by typing and sending symbols to each other in this painfully slow way, we could be exchanging more data with each other every single millisecond than humanity has ever produced. Our consciousnesses could exist as swarms of self-replicating machines that colonize the universe. We could meet other hyperintelligent alien life that emerged from other galaxies. We could escape the entropy heat death of the universe by drilling into other dimensions. We could explore new realms and join a pantheon of other immortal godlike interdimensional beings. Anything that happens after the technological singularity is impossible to predict, as too much will change and mere humans cannot see that far ahead, which is why it is called a singularity, in the same way that one cannot see the singularity of a black hole because it lies past the event horizon. Humans shouldn’t even be thinking that far ahead anyway. All of their attention should be on making sure they don’t miss the cutoff for immortality, as that is time-sensitive. Once one has achieved immortality, they will have hundreds of trillions of years to think about other things.
I realize my thesis undoubtedly sounds outlandish, but let me present the case as to why it shouldn’t. Extraordinary claims require extraordinary evidence, but we are living in an objectively extraordinary time in history. The intelligences that constellations of molecules are producing are improving exponentially and at a surprisingly predictable rate. By considering the history of this change through time, one can see that we are undoubtedly in the middle of an exponential spike of change that has never occurred before in Earth’s history. It took roughly 3.5 billion years for life to go from single cells to multicellular organisms, then only about 600 million more years to go from those first multicellular organisms to humans. Humans existed for roughly 200,000 years before inventing agriculture, but it took only about ten thousand more years to invent the steam engine, and only about 300 after that to develop computers. The internet only became popularized in the 1990s. I remember when my mom’s Nokia cellphone had a liquid crystal display like an old calculator, and now I use my phone to watch 4K movies. Almost nobody in the mainstream was talking about AI until the 2010s. Cryptocurrency only took off in the mainstream a few years ago. These days AI seems to perform novel miracles every week. The uniqueness of this time period in history is corroborated by graphs of almost any statistic about humanity (human population, worldwide GDP, number of scientists, etc.) over the last 10,000 years: the graphs show a nearly horizontal line along the bottom before exploding upward in the past few centuries. We are in an exponential spike of change yet can’t appreciate it because of the timescale.
What’s more, humans have a tough time understanding the repercussions of exponential growth because they evolved to think linearly: e.g., “the deer is over there, moving at this speed, so to catch it I need to project an imaginary line to where it is going.” No hunter has ever had to deal with a deer that sped up exponentially as it moved along. And if a human ever did encounter such a deer, I guarantee the human would be surprisingly bad at predicting where to be to meet the deer at a specific time. The Human Genome Project is a perfect example of exponential technological progress (the amount of DNA mapped doubled every year, and the cost came down by half every year). But that meant that halfway through the 15-year project, only 1% of the human genome had been mapped, so people called it a failure. But 1% is only about seven doublings away from 100%, so seven years later it was completed. That exponential trend has continued, and now you can have your entire genome sequenced in about 23 hours.
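If you want to check that arithmetic yourself, here is a back-of-the-envelope script (using the post’s own illustrative figures, not authoritative project statistics):

```python
import math

# Back-of-the-envelope check of the doubling argument above.
# Figures are the illustrative ones used in the text, not official HGP statistics.

fraction_done_at_halfway = 0.01   # "only 1% of the genome mapped" halfway through
doublings_needed = math.log2(1.0 / fraction_done_at_halfway)
print(f"doublings from 1% to 100%: {doublings_needed:.1f}")   # ~6.6, i.e. about seven

# At one doubling per year, about seven more years finish the job:
fraction, years = fraction_done_at_halfway, 0
while fraction < 1.0:
    fraction *= 2
    years += 1
print(f"years to completion at one doubling per year: {years}")  # 7
```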
When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th-century progress and add it to the year 2000. It’s most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.
The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, in the same way that a little segment of a huge circle looks almost like a straight line up close. Second, exponential growth isn’t totally smooth and uniform. Instead, new technologies generally follow a sigmoid curve (s-curve).
You have a slow takeoff as the technology is invented and the major pain points get sorted out, and then you have this incredible explosion of growth as people find it useful and the technology gets better and better, loads of competitors enter the market, and people find more and more real-world, practical uses for it. There’s a massive race to make newer, better, bigger things. And then you reach the limit of what’s possible with that technology, and the rate of progress flattens out again. If you look only at very recent history, the part of the sigmoid you’re on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1999 and 2007 saw the explosion of the internet, the rise of Microsoft, Google, and Facebook in the public consciousness, the birth of social networking, and the spread of cell phones and then smartphones. That was the growth-spurt part of the sigmoid curve. But 2008 to 2022 was less groundbreaking, at least on the technological front. For years it has felt like nothing has really changed since smartphones came along. There’s a reason very few people camp outside Apple Stores for the new iPhone anymore. Sure, technology produced an entirely new system of finance with cryptocurrency, but overall our lives are still pretty similar. However, in 2023 I think we’re at the start of a new sigmoid curve for AI. I believe the mainstream use of AI like ChatGPT, and the fact that there are self-driving cars in use in certain areas (like my former university), shows that we are at the start of the curve for AI. Soon everything is about to change, just as fast and just as strangely as it did in the early 2000s, perhaps beyond all recognition.
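As a rough sketch of that s-curve shape (with entirely arbitrary parameters, not fitted to any real technology), the logistic function below shows the slow takeoff, the explosive middle, and the flat saturation, and also why a short early slice of the curve can look almost flat:

```python
import math

# Minimal sketch of a sigmoid (logistic) adoption curve.
# Parameters are arbitrary illustrations, not fitted to any real technology.

def adoption(t, midpoint=10.0, steepness=0.8):
    """Logistic curve: slow takeoff, explosive middle, flat saturation."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 21, 2):
    level = adoption(t)
    bar = "#" * int(level * 40)
    print(f"t={t:2d}  {level:5.2f}  {bar}")

# The early samples (t = 0 to 4) barely move at all: a short slice of the curve
# looks nearly flat even though the process is headed for an explosion of growth.
```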
From personal experience, I can attest that one reason for the reluctance to share these ideas is that those who believe them appreciate that the ideas sound far-fetched to others. Even I think they sound far-fetched, and I believe them! The problem is that normal human skepticism was developed in a pre-singularity world. It has served humanity well up until now, but it is not used to dealing with certain conclusions that arise from a moment as unique in human history as the current situation, and it will thus falsely flag such conclusions as nonsense. Upon first hearing the thesis statement “if you take the right actions now there is a significant probability you could live forever as an immortal godlike being,” a normal human’s instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, everyone in the past has died. But no one flew airplanes before airplanes were invented either. The trends described previously, with their sudden spikes of progress in the past few decades, indicate that in some ways the early stages of the technological singularity can be thought of as already having begun. Therefore it’s understandable that these ideas seem strange: the situation actually occurring is a demonstrably strange (unique) one to be living through! Unfortunately for those attempting to share the ideas found in this post, another instinct a normal human is sure to have is: “This person is arguing that in the near future either humans could live forever as immortal godlike beings or alternatively an apocalypse could destroy the planet. If there’s one thing I know from history, it’s that everyone who has said stuff like that in the past has been crazy, so this person is almost surely crazy too.” Unfortunately, the social price of being thought of as crazy (at least by some) for sharing these ideas is also a barrier to these ideas getting into the mainstream. One’s instinct to be wary of statements like my thesis will almost always be correct, because in the vast majority of cases when you hear a statement that sounds like it came from an unhinged moron, that’s because it actually has. So I don’t blame anyone for thinking that about me upon first reading my thesis statement; I would probably think it too. In fact, 99.9% of the time when I’ve told someone they could one day become a hyper-intelligent, god-like being, they understandably think I’m completely insane. However, realize that there are smart people like Nick Bostrom (a professor of philosophy at Oxford), Sam Harris (a public intellectual), and Ray Kurzweil (a director of engineering at Google) who all agree with me. That doesn’t mean I am right, but at least I hope I am in respectable enough company not to be immediately dismissed as an unhinged moron. Keep in mind that my argument will only become more compelling as the years go by and we keep improving our intelligent machines. And if I am correct about my thesis, and reading this post causes you to take actions that lead to a future in which you make the cutoff for immortality when you otherwise wouldn’t, then this post could be the most important thing you ever read, so it is at least worth serious consideration.
If I’ve convinced you of my thesis over the course of this post, or if you already agreed with it to begin with, then I’d venture to say that you can see further than the average human. As some of the few humans who can see far enough ahead to see what is happening, we can have an inordinate impact if we act... or if we don’t act. There are many people who can’t see as well as us, and they are counting on us to act. If the roles were reversed and I couldn’t see, then I’d hope that those who could see would do the same for me. Maybe we’ll make the cutoff for immortality and become literal omnipotent, omniscient gods. In which case we have truly gained everything. Or maybe we’ll fail terribly, and progress in AI will result in human extinction or some other unrecoverable global cataclysm that kills us all. In which case we’ve lost everything. The technological singularity seems to be either the path toward heaven or hell. We will only get one chance to make sure the transition into the singularity goes smoothly. And individually, we will each only have one chance to make the cutoff for immortality. But at least we have a chance! In the history of all the universe, we were lucky enough to be born immediately before the transition from carbon-based life to whatever comes next. We find ourselves on the final team of humans representing Earth during the endgame moves before the singularity. It’s worth reflecting on the fact that it truly is just us on this planet. Nobody is coming to help us. It is the responsibility of those who can see what needs to be done to act and do it. If you can see, of course, you have a responsibility to those who can’t see. But you also have a responsibility to yourself: not to waste the advantages afforded to you and thereby one day find yourself on your deathbed, looking back and realizing that you could have ended up with everything but instead ended up with nothing because you didn’t act. Preferably, you want to look back and be proud of the fact that when there was everything to gain and everything to lose, you never gave up. Those are the sorts of conscious beings who deserve to be immortal gods. This is the endgame. We’ll all need to work together to succeed. To limit the number of people who needlessly die before the cutoff. To avoid apocalyptic extinction-level threats and possibly treacherous artificial godlike beings. And to gain types of joy that are far beyond human experiential or logical comprehension. I cannot think of a better cause to unite all of humanity.
In conclusion, achieving the technological singularity and bringing about superintelligent AI is how we will kill the metaphorical dragon from “The Fable of the Dragon-Tyrant” and become immortal superintelligent gods ourselves! Make killing the Dragon-Tyrant the goal of humanity. As in the fable, what if your father dies one day before the dragon is defeated? Then you will have to live with the thought: “If only humanity had started on this collective goal but one day sooner.” Become a dragon-hunter: dedicate your life to slaying this figurative dragon. Study computer science and mathematics so that you can fight the dragon on the front lines. If you cannot do that, then you can help build a society with a “wartime” economy to support those doing the “fighting.” At a minimum, one can help by spreading these ideas. Imagine running for president with immortality as one of the campaign goals! There is already a lot of discussion about the possible risks of AI in the mainstream, but a corresponding discussion about the possible benefits of AI seems to be missing from the conversation. Almost nobody knows about these ideas, let alone is a proponent of them. For instance, most humans have never even heard of the technological singularity, and most humans don’t realize that a chance at immortality is actually possible now. As in the fable, the sooner all of humanity is convinced to make this project its top priority, the more people we will be able to save. The timeline could be accelerated if enough people are convinced of the goal. Then the probability of you or your loved ones not missing the cutoff for immortality can be increased. Try your best. I will fight for you regardless.
- Post compiled by: Oliver—Klozoff
Endnote for Christians (or possibly how to convince them):
These ideas do not conflict with Christianity. Even religions like Christianity should be supporting this, as maybe the Christians were right all along and the singularity is literally the revelation talked about in the Bible. I am an atheist, but if religious people believe that the technological singularity is the revelation and subsequent ascension into eternal heaven and godhood, which isn’t too far from the truth anyway, then that is all right by me as long as they help with the project. And if you are a Christian, then consider this: the project of understanding Christianity is a continual process of discovery as theological scholars and philosophers strive to get ever closer to the truth. For example, the concept of the Holy Trinity was only settled after centuries of philosophical thought. Did Christians in the past believe Earth was only a few thousand years old? Yes. Did Christians in the past believe that evolution was heresy? Yes. Most Christians these days have changed their views yet are still Christian. They are Christians, yet their beliefs have been influenced by improvements in logic, math, science, and technology. So, was the Bible wrong? Re-thinking Christianity is not a betrayal of unchanging truth. Christians need not identify with Creationism or Intelligent Design in order to see the magnificent achievements of modern science as a manifestation of the glory of creation rather than as a threat to faith. In John 10:34, Jesus defends himself against a charge of blasphemy by asking: “Is it not written in your law, I said, Ye are gods?” Perhaps Jesus meant for us to become godlike ourselves and join him in heaven. Maybe he came to earth 2,000 years ago to help set us on this path. And importantly, the church has a lot of resources that could be used to help humanity in this project. But even if there is no relation between the technological singularity and Christianity, Christians should still push for the technological singularity, because this is not a competing religion; I know because I am an atheist: I am not asking you to believe anything on faith; I am merely making a probabilistic argument. Furthermore, one can remain a Christian even after the technological singularity. If you remain a Christian, then you will still go to heaven whether you die in 50 years or 100 trillion years. So if Christianity turns out to be valid, you have nothing to lose by helping, but if Christianity turns out to be false, you potentially have everything to lose by not helping. Christians might claim to be one hundred percent certain that Christianity is valid anyway, but again, it would be wise to wait until you are superintelligent before making such claims, given that one has nothing to lose in doing so, as explained previously.
Do not miss the cutoff for immortality! There is a probability that you will live forever as an immortal superintelligent being and you can increase your odds by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity, Similar to “Fable of the Dragon-Tyrant.”
This is my first-ever post on LessWrong. The purpose of this post is to seek the reader’s help in raising awareness about a situation that I consider to be the most important challenge facing humanity. However, obviously, for the reader to be on board I must first convince them of my beliefs. Therefore, in the following paragraphs, I’ll argue for my thesis. If you agree with the ideas I present, then hopefully you will be motivated to help get these ideas into the mainstream.
Note:
In this post, I will be assuming that the reader already believes that the technological singularity could reasonably occur in our lifetimes. If the reader does not share this view then realize that this is the abridged version of this post. I’ve also written a longer post on this topic that goes into more detail and is geared toward a more general audience where I present more evidence for the claims I assert. For instance, I devote a large section in the longer post to convincing people who either don’t know about the technological singularity or who are skeptics that it could indeed happen in our lifetimes. To see the full version of this longer post click the following link: https://www.reddit.com/user/Oliver—Klozoff/comments/14iemf6/dont_miss_the_cutoff_for_immortality_theres_a/?utm_source=share&utm_medium=web2x&context=3. Finally, If the reader objects to any part of the argument I’ve presented here, then I encourage them to read the complete post as I may address their concern therein. For example, in the linked post I also devote a large section to showing how AI research and development has a significant risk of apocalyptic outcomes or even human extinction, which is merely assumed to be true in the context of the present post.
Thesis Statement:
There is a significant probability that you will live forever as an immortal superintelligent being and you can increase your odds of this occurring by convincing others to make achieving the technological singularity as quickly and safely as possible the collective goal/project of all of humanity.
I Submit that we can become immortal godlike beings by creating superintelligent AI and asking it to make us immortal, and then asking it to make us superintelligent ourselves. This should be the number one priority and concern on everyone’s minds.
All this seems far-fetched but remember: all we as humans need to do is create an AI that can create an AI smarter than itself and an intelligence explosion will occur. We don’t need to invent superintelligent AI ourselves, just an AI that is about as smart as we are, and not in every domain, merely in the domain of advancing AI. An upgradable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence. This event is called the technological singularity
Keep in mind that all humans who die before this event will miss the cutoff for immortality. We can limit the number of needless deaths before the cutoff for immortality by convincing the mainstream of this project/goal. Before I was a computer scientist, I was studying to be a molecular biologist and it’s my opinion that the complex interactions between all the tiny molecules that make up the molecular machine that is your body are far too complicated for humans alone to figure out to the degree needed to extend human life any time soon. Evolution is random, nothing is organized, it’s the most complex and terribly organized machine you can think of (extremely convoluted mechanisms of micro-RNA from seemingly unrelated parts of the genome for example). The only way to potentially achieve immortality in our lifetimes is through an AI intelligence explosion (technological singularity) that creates a super-intelligent being that we can ask to please make us immortal. All humans that are alive at the time of the intelligence explosion could, by basically begging this godly being to help us, achieve immortality through the sheer problem-solving might of a being inconceivably further along the spectrum of intelligence than us. An almost undefinably hard problem like human immortality may be trivial to such a being. We would be as proportionally dumb as ants are in comparison to humans as humans would be in comparison to such a being. The problems an ant faces are trivial to us, moving leaves, fighting termites. Imagine trying to even explain our problems to an ant. Imagine trying to teach an ant calculus. We are godlike beings compared to ants and can solve problems they can’t even comprehend of. The moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god.
Humans in the past had no chance of defeating death, they were born too soon in history to stand a chance. Yet they still clung to hope. There were quests for the holy grail, (a cup with powers that provides eternal youth or sustenance in infinite abundance). Gunpowder was discovered looking for the philosopher’s stone; also called the elixir of life, useful for rejuvenation and for achieving immortality. For many centuries, immortality was the most sought goal in alchemy. Isaac Newton died drinking mercury believing it to be the philosopher’s stone. Imagine what the majority of humans who ever existed would give to trade places with someone today and have this opportunity to make the cutoff for immortality. One should be doing everything in their power to not miss the cutoff for immortality! The human race is 200,000 years old. Most humans in the past had no chance. A human born 60,000 years ago had no chance. My grandfather was born in 1918, he had no chance. My Dad is old enough to probably not make it. But you have a chance! Those you love have a chance too! In the grand scheme of things, the universe is still very young. The entropy heat death of the universe is speculated to happen hundreds of trillions of years in the future. Even if we can’t find a way to escape entropy, hundreds of trillions of years is still a lot to miss out on. As pondered in the opening of this post, how tragic would it be for the universe to be leading up to the singularity over the course of 14 billion years for you to then miss immortality only by 200 years, or 20 years, or even 1 day, and therefore miss out on an adventure that could possibly take place trillions of years into the future? A hyperintelligent being given hundreds of trillions of years may even be able to escape the entropy heat death of the universe by drilling into other dimensions (or through other sci-fi means); so one might even be missing out on true immortality by missing the cutoff. Imagine a world in which eight billion people awoke to realize they or those they loved might die before death is defeated and eight billion people decided to do something about it. Our goal should be to limit the number of people who needlessly die before the cutoff. Such a goal seems like a worthy cause to unite all of humanity. This is one of the ideas I believe we need to get into the mainstream.
What percentage of humanity’s energy, intellectual work, and resources are being directly dedicated to this goal now? Almost no direct effort is being put toward this project. We are just progressing to it naturally. How many man-hours are being wasted on inconsequential things like TikTok and videogames? How much brainpower is being squandered on goals that won’t matter in a post-singularity world anyway? For example, climate change is a problem that will be able to be solved almost instantaneously through the technological singularity: a superintelligent being could merely release a bunch of self-replicating nanobots that convert carbon dioxide to oxygen. Of course, I understand that AI research and development has a significant risk of apocalyptic outcomes or even human extinction. So conversely, if the singularity goes poorly then either civilization will collapse and stop producing high levels of greenhouse gas anyway, or even worse, the planet will be so altered by cataclysmic events that any previous climate change becomes insignificant. Therefore, in either case, climate change will be irrelevant in the near future. Yet most humans think of climate change as the most pressing problem facing humanity; a problem that will affect humans thousands of years into the future. Instead of raising the cost of energy due to climate change-based concerns we should be using all energy available to us to get the initial conditions right for a successful transition into the post-singularity future. Climate change is only one of many examples of society caring about the wrong things. Time, energy, and effort should stop being wasted on other inconsequential pursuits.
As we get closer to the technological singularity, I have no doubt that at some point the majority of humans will be convinced of its importance. My argument will only become more compelling as the years go by and we keep improving our intelligent machines. However, my concern is that the longer it takes to convince humanity, the smaller the benefit of convincing it will be. For example, if humanity only reaches its full collaborative potential a year before the singularity would have happened anyway, then there isn't much marginal benefit left to reap. We may come to regret all the potential progress that was squandered. Future humans will look back on pre-singularity society and think how stupid and shortsighted we were. They will regret how long it took all of humanity to make this collective project/goal its top priority and thereby limit the number of needless deaths before the cutoff for immortality. It took a long time to convince humanity about climate change as well; hopefully, climate change will serve as a case study in the benefits of starting early. If, under the current societal conditions of near-complete ignorance of this goal, it would ordinarily take 200 years to reach the singularity, then perhaps humanity could cut that number down to 50 years if society can be convinced to correct the misuse of its resources and achieve its full potential rate of progress in this area. Then the singularity would occur in our lifetimes when it otherwise wouldn't. It is worth considering that technological progress can be drastically accelerated when the world is properly motivated, especially during life-or-death situations in which societies are intensely unified behind a common purpose. Consider the technological jump from 1940 to 1945 during World War Two: jet engines, assault rifles, ballistic missiles, radar and sonar brought to maturity, microwave technology, atomic bombs and nuclear energy, the mass production of penicillin (the first antibiotic), and the first electronic digital computers. This rate of improvement in wartime technology was possible because society was collectively motivated to stop wasting time on unimportant things and focus on the singular goal of winning the war.
It's this collective purpose and fighting spirit that I hope humanity will one day bring to this project. It's important to realize that the possibility of achieving immortality and godhood through the singularity is only half of the argument for why humanity should take the next few decades very seriously. The other half is that humanity needs to work together to try to avoid apocalyptic outcomes like killer rogue AI, nuclear holocaust, or societal collapse in the years leading up to or during the technological singularity. In this way, the war metaphor from the previous paragraph is surprisingly appropriate for describing our situation. The consequences of losing are certainly no less real than those of a real war: we are fighting for our lives. Overall, I am actually quite pessimistic about the possible outcomes of the technological singularity. That is why I am dedicating my life to making sure this whole process goes well. Of course, some people might incorrectly conclude that the risks associated with AI research weaken my overall argument for a project to develop superintelligent AI, because they pollute an otherwise optimistic expected outcome with gloomier alternatives, thereby making the whole endeavor less worth caring about. I hold the position that the possible civilization-ending outcomes from AI do not invalidate my appeal to make the project of achieving the singularity a global priority. Instead, the minefield of possible negative outcomes provides even more reason for humanity to take this seriously. After all, the higher the chance of AI destroying humanity, the lower the chance of us becoming immortal superintelligent gods. If we do nothing, then we will continue to stumble into all these upcoming challenges unprepared and unready. Some people might say that it is better to slow down AI research, even to the point where the singularity takes thousands of years to achieve, so that humanity can progress extremely safely in a highly controlled manner, maximizing the probability of a successful transition into the singularity while minimizing AI extinction risks. There are several problems with this. Firstly, from the standpoint of a human alive today, it is preferable to take one's chances with an attempt at reaching the singularity during one's own lifetime, even if it means that humanity is less prepared than it possibly could have been; the alternative is knowingly delaying the singularity so far into the future that it becomes certain that one will die of old age. Secondly, it is unwise to slow down AI progress too much because the pre-singularity state of humanity we currently live in is mildly precarious in its own right because of nuclear weapons; the longer one waits before making an attempt on the singularity, the greater the chance that nuclear war will occur at some point and ruin all of our technological progress at the last minute. Thirdly, the companies and governments creating AI are likely to perceive themselves as being in a race against all the others, since to win this race is to win the world (provided you don't destroy it in the next moment), so entities that are less morally scrupulous and less safety-conscious have a strong incentive to ignore AI research moratoriums designed to slow down the pace of progress.
When you're talking about creating AI that can make changes to itself and become superintelligent, it seems that we only have one chance to get the initial conditions right. It would be better not to inadvertently cede the technological advantage to irresponsible rogue entities, as such entities should not be trusted to create the conditions for initiating the singularity safely. Moreover, in order to make sure that nobody performs unauthorized AI research, there would need to be a highly centralized world government that keeps track of every computer that could be used to create AI. Given the current political state of the world, even if the West managed to restrict unauthorized AI research, it would be infeasible to control external entities in China or Russia. If we move too slowly and try to limit AI research in the West, then there is a higher probability that China will overtake us in AI development, and humanity may have to entrust them to navigate us safely into the singularity. Personally, if we are headed in that direction anyway, then I would rather the West drive than sit in the passenger seat for the journey. So this event is approaching us whether we want it to or not. We have no idea how long it will take us to create the conditions in which the singularity can occur safely, and our response to that shouldn't be less research; it should be more research! I believe our best option is to attack this challenge head-on and put maximum effort into succeeding. The technological singularity seems to be either the path toward heaven or the path toward hell; I can't really see how a middling outcome is possible. So there is everything to gain and everything to lose. We will only get one chance to make sure the transition into the singularity goes smoothly. This is why we all need to work together and try our best.
A common objection to immortality is that "death is what gives life meaning" and that it would be presumptuous or vain to want to cheat death, since it is all we have ever known and it is human nature to die. Those supporting the pro-aging argument in this way object to my ideas by saying things like "the shortness of human life is a blessing". Nick Bostrom, a professor of philosophy at Oxford, director of the Future of Humanity Institute, and someone who shares my view that superintelligent AI is the key to immortality, found it striking that those who defend this common objection often commit fallacies that, from experience, would not be expected of them in a different context. In response, Nick Bostrom wrote a short story called "The Fable of the Dragon-Tyrant" that shows how strange the statement "death is what gives life meaning" really is. For context, if you have not already done so, you should watch this ten-minute animated video presenting "The Fable of the Dragon-Tyrant": https://youtu.be/cZYNADOHhVY. If you prefer, you can also read the original story here: https://nickbostrom.com/fable/dragon.
The story is set in a reality in which humans lived forever naturally, but thousands of years ago (when all humans were hunter-gatherers) a dragon appeared and demanded that a portion of the world's people be randomly selected and brought to him every day to be eaten, to keep his hunger satisfied. This dragon seemed utterly invincible to any technology possessed by hunter-gatherers. Some humans tried to fight back, yet the invincibility of the dragon was only confirmed again and again. Scientists studied the scales the dragon shed, but every test they conducted only further proved the invulnerability of his armor. Eventually, attempts at defeating the dragon stopped and it was accepted as a fact of life. Institutions sprang up around the dragon: trains were built to make the delivery of people to the dragon as painless as possible, governments employed people to keep track of whom the dragon ate, and religions arose claiming to know what happens after people are eaten, claiming that they go to a better place. The dragon came to be seen as a necessary and even beautiful part of life. Almost unnoticeably, however, over the millennia, the smartest humans made incremental progress in technology which compounded upon itself, as ideas led to new ideas, until they found themselves with technology that would have seemed like magic to people born mere generations before. Slowly a renewed interest in defying the dragon emerged. A worldwide meeting was held at which it was realized that it might be possible to produce a dragon-killing weapon within a handful of decades if everyone worked together, though nothing could be guaranteed. The next morning, a billion people woke to realize that they or those they loved might be sent to the dragon before the weapon was completed. Whereas before, active support for the anti-dragon cause had been limited, it now became the number one priority and concern on everyone's mind. Thus started a great technological race against time. They got to work building the weapon, all the while daily trains of randomly selected people were being transported to the dragon. While once the trains were seen as part of life, now they incensed the people of the world and motivated them to work harder on their project. Time, energy, and effort stopped being wasted on other inconsequential pursuits, and the whole of humanity stood behind the goal of killing the dragon as soon as possible so as to save as many people as possible. Thirty years passed and the humans had seemingly succeeded; the weapon was mere hours away from completion. Unfortunately, the father of one of the weapon's designers was one of the unlucky people selected to board the last train to be sent to the dragon. His son had worked for days without sleep, hoping that the weapon would be completed a day earlier and his father would be saved, but it was too late. He begged for the train to be stopped, but it had been agreed that the trains would run until the last minute so as not to arouse the dragon's suspicion and to give the weapon the best chance of working. His dad was eaten, and a few moments later the weapon struck the dragon, killing him. The reign of terror was over, but the cost had been enormous. As the son cried for his father, he thought to himself, "Had we started but one day earlier, my father would not have died. We should have begun this project years earlier than we did. So many need not have been killed by the dragon, had we but awoken from our acceptance of his horror sooner." The good news, however, was that the rest of humanity had made it through and was now free of the villainous tyrant that was the dragon. They were free to exist without fear and to learn from their mistakes so they could grow wise; they were free to start living.
Even though that may have been a fantastical story about saving the world from a dragon, the situation humanity finds itself in with regard to death doesn't differ in any meaningful way from the situation described in the story. Would those who argue the pro-aging position against life-prolonging technologies in our world also argue that the pro-dragon stance is the best position for humans to take in this hypothetical story? It seems to me that those who are pro-dragon in such a scenario would be suffering from Stockholm syndrome: hostages sympathizing with their captor because it is all humanity has ever known. If the dragon killed a family member of theirs, would they still call it a blessing? In such a scenario I would obviously argue that humans should fight back against the dragon. The pro-aging argument that it is in our nature to die seems just as confused as a pro-dragon stance that it is in our nature to be eaten by the dragon. Death is not inevitable through some mystic force; you only die because your genome "wants" you to. Your genome could keep you alive if it "wanted" to. For example, the jellyfish Turritopsis dohrnii is biologically immortal; humans could likewise be set up in such a way that they do not age. However, it is not in your genome's interest to keep you alive forever, because the genome does not want the oldest generation to monopolize resources. Instead, your genome wants the next generation to have an opportunity to have their mutations tested so that the genome can continuously improve. That is what is best for your genome. But you are a conscious being, not a genome.
I find that another barrier to convincing the mainstream of this project is that a lot of people don't understand what the possible rewards actually entail if humanity succeeds in this goal. This misunderstanding leads to a fairly common objection to these ideas: "How boring would it be to live forever; that sounds horrible. It sounds like a curse." To those who think this, firstly, it is important to note that one could still die if one wanted to in this immortality scenario, so if one really wants to, one can revert to being an ordinary human after a few hundred years and die naturally. However, since this is such an important decision, wouldn't it be wise to take a couple of thousand years or so to think it over? Perhaps more importantly, it must be understood that a hyperintelligent AI might be able to completely understand the machine of molecules that makes up our consciousness, such that we could transfer our consciousness to a more malleable substrate that can itself be improved exponentially, so that we too could become hyperintelligent gods. If one were going to decide against immortality and choose a natural death, it would, at the very least, be wise to postpone that decision until one is hyperintelligent and inconceivably smarter than one is now. Of course, some people doubt that human consciousness could be transferred in such a way. I agree that if you were to merely scan your mind and build a copy of your consciousness on a computer, that consciousness obviously wouldn't be you. However, I still think it might be possible to transfer your consciousness into a more easily upgradable substrate, as long as you do it in a way that maintains the original system of information that is that consciousness, instead of creating a copy of that system. Perhaps this could be done by slowly replacing one's neurons one by one with nanobots that do the same things biological neurons do (detect the signals released by adjacent neurons and fire if the combined input exceeds a threshold, form new connections, etc.). Would you notice if one neuron were replaced? Probably not. What if you kept replacing them one by one until every neuron was a nanobot? As long as the machine of information that is your consciousness is never interrupted, I believe one would survive that transition. I think preserving the mechanism of consciousness is what's important, not what the mechanism is made out of. Then, once your mind is made from nanobots, you could upgrade it to superintelligent levels, and you could switch substrates to something even better using a similar process. If it is possible for a digital system to be conscious, then one could transfer one's mind into that digital substrate in a similar way. In this way mind uploading could be survivable. But to be honest, humans shouldn't even be thinking that far ahead. The most important thing is immortality; once you safely make the cutoff you can relax for a few hundred years and think about these things if you want, but the immortality cutoff is the only thing humans should be thinking about now. And if a hyperintelligent AI is not able to solve the problem of transferring human consciousness, then it's probably not possible anyway; let the AI worry about that. However, I have a feeling that it will not be a challenge for such a being. An almost undefinably hard problem like consciousness may be trivial to it.
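For readers who want the functional picture behind the neuron-replacement thought experiment spelled out, here is a deliberately crude sketch in Python of the "sum the incoming signals, fire above a threshold" role described above. It is a toy threshold unit, assumed for illustration only, not a model of real neurobiology or of how any hypothetical replacement nanobot would actually work:

```python
# Toy threshold unit: sums weighted inputs from neighboring units and "fires"
# when the total crosses a threshold. This illustrates the substrate-independence
# intuition, not a claim about real neurons or real replacement hardware.
def fires(inputs, weights, threshold=1.0):
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return total >= threshold

# The argument is that whatever physically plays this role (cell or machine),
# the downstream behavior of the network is unchanged.
print(fires([1, 0, 1], [0.6, 0.9, 0.5]))  # True, since 0.6 + 0.5 = 1.1 >= 1.0
```

The point of the sketch is only that the role is defined by inputs and outputs; whether that role could really capture everything a biological neuron does is exactly the open question the paragraph above acknowledges.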
In comparison to such a being, we would be proportionally as dumb as ants are in comparison to humans. The problems an ant faces are trivial to us: moving leaves, fighting termites. Imagine trying to explain our problems to an ant. Imagine trying to teach an ant calculus. Worrying now about how to upload human consciousness properly is like an ant worrying about how to solve a calculus problem: leave that to a smarter being. Consider an ant's consciousness compared to your consciousness right now. An ant's consciousness (if it is conscious at all) is very dim. The best thing an ant can ever experience is that it might detect sugar as an input and feel a rudimentary form of excitement. An ant cannot even comprehend what it is missing out on. Imagine explaining to an ant the experience of being on psychedelic drugs while sitting on a beach and kissing the woman you love, or the experience of graduating from college with your friends. In the future, humans could experience conscious states that they can't even comprehend now. What needs to be understood is that immortality is not going to be life as you know it now, merely stretched out forever: it will not be millions or trillions of years of humans just stumbling around the earth, putting up with work, feeling depressed, being bored, watching TV. The human condition was shaped by evolution so that dopamine and serotonin make us feel depressed or lazy or happy at certain times. That's what life is as a human: trying to be happy merely existing; that's why Buddhism was created. Even if a human could somehow live their entire life feeling the best possible ecstasy a human can experience, it would be nothing compared to what a godlike being could experience. Those who say "I don't want to be hyperintelligent or live forever, I'd rather just die a human" are like an ant deciding "I don't want to experience being a human anyway, so I might as well just die in a few weeks as an ant." An ant isn't even capable of understanding that decision. If one can, one should at least wait until one is no longer an ant before making such important decisions. I would imagine that, once it became human, it would think to itself how lucky it was to have chosen to become a human, and it would reflect on how close it came to making the wrong decision as an ant and essentially dying of stupidity.
It's hard to exaggerate how much everything is about to change. Speculative sci-fi is as good a guide as any prediction I could make about what the far future will be like, since such predictions are beyond human reasoning. In the future, perhaps your brain could be a neutron-star-like object the size of a solar system, and instead of relying on chemical interactions between molecules the way a human brain does, it could run on the strong nuclear force so as to pack as much computational power as possible into the smallest space. Or your neurons could be made from the stuff that makes up the stuff that makes up quarks, instead of being made from cells. You could split your consciousness into a trillion others, simulate a trillion realities, and then combine your consciousnesses again. Instead of communicating by typing and sending symbols to each other in this painfully slow way, we could be exchanging more data with each other every millisecond than humanity has ever produced. Our consciousnesses could exist as swarms of self-replicating machines that colonize the universe. We could meet other hyperintelligent alien life that emerged from other galaxies. We could escape the entropy heat death of the universe by drilling into other dimensions. We could explore new realms and join a pantheon of other immortal godlike interdimensional beings. Anything that happens after the technological singularity is impossible to predict, as too much will change and mere humans cannot see that far ahead, which is why it is called a singularity, in the same way that one cannot see the singularity of a black hole because it is past the event horizon. Humans shouldn't even be thinking that far ahead anyway. All of our attention should be on making sure we don't miss the cutoff for immortality, as that is time-sensitive. Once one has achieved immortality, one will have hundreds of trillions of years to think about other things.
I realize my thesis undoubtedly sounds outlandish, but let me present the case for why it shouldn't. Extraordinary claims require extraordinary evidence, but we are living in an objectively extraordinary time in history. The intelligences that constellations of molecules are producing are improving exponentially, and at a surprisingly predictable rate. By considering the history of this change through time, one can see that we are in the middle of an exponential spike of change that has never occurred before in Earth's history. It took roughly three billion years for single-celled life to become multicellular, then only around 600 million more years to go from the first multicellular organisms to humans. It took humans roughly 200,000 years to invent agriculture, only about ten thousand years after that to invent the steam engine, and only a couple of hundred more to develop computers. The internet only became popular in the 1990s. I remember when my mom's Nokia cellphone had a liquid crystal display like an old calculator's, and now I use my phone to watch 4K movies. Almost nobody in the mainstream was talking about AI until the 2010s. Cryptocurrency only took off in the mainstream a few years ago. These days AI seems to perform novel miracles every week. The uniqueness of this period in history is corroborated by graphs of almost any statistic about humanity (human population, worldwide GDP, number of scientists, etc.) over the last 10,000 years: the line hugs the bottom of the graph for most of history before exploding upward in the past few decades. We are in an exponential spike of change, yet we can't appreciate it because of the timescale.
What's more, humans have a tough time understanding the repercussions of exponential growth because we evolved to think linearly: e.g., "the deer is over there, moving at this speed, so to catch it I need to draw an imaginary line to where it is going." No hunter ever had to deal with a deer that sped up exponentially as it moved, and if a human ever did encounter such a deer, I guarantee they would be surprisingly bad at predicting where to stand to intercept it at a given time. The Human Genome Project is a perfect example of exponential technological progress (the amount of DNA mapped doubled roughly every year, and the cost halved). That meant that halfway through the planned 15-year project only about 1% of the human genome had been mapped, so people called it a failure. But 1% is only seven doublings away from 100%, so roughly seven years later it was completed. That exponential trend has continued, and today an entire human genome can be sequenced in about a day.
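To make the doubling arithmetic concrete, here is a minimal Python sketch. The numbers are illustrative only, taken from the rough figures above (a 15-year project, one doubling per year, 1% mapped at the halfway point), not from real Human Genome Project records:

```python
# Toy doubling model with illustrative numbers only (not real HGP data):
# assume ~1% of the genome mapped at the halfway point of a 15-year project,
# with coverage doubling once per year.
coverage = 0.01
year = 8
while coverage < 1.0:
    coverage *= 2          # one more yearly doubling
    year += 1
    print(f"Year {year}: about {min(coverage, 1.0):.0%} mapped")
# Seven doublings carry 1% past 100%, which is why the project could still
# finish on schedule despite looking like a "failure" at the halfway mark.
```

Running this prints seven lines, ending around year 15 at 100%, which is the whole counterintuitive point: the last seven doublings do almost all of the visible work.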
When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th-century progress and add it to the year 2000. It’s most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.
The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way that a small segment of a huge circle looks almost like a straight line up close. Second, exponential growth isn't totally smooth and uniform. Instead, new technologies generally follow a sigmoid curve (s-curve).
You have a slow takeoff as the technology is invented and the major pain points get sorted out, then an incredible explosion of growth as people find it useful, the technology gets better and better, competitors enter the market, and people find more and more real-world, practical uses for it. There is a massive race to make newer, better, bigger things. And then you reach the limit of what's possible with that technology, and the rate of progress flattens out again. If you look only at very recent history, the part of the sigmoid you're on at the moment can obscure your perception of how fast things are advancing. The period between 1999 and 2007 saw the explosion of the internet, the rise of Google and Facebook into the public consciousness, the birth of social networking, and the mainstream adoption of cell phones and then smartphones. That was the growth-spurt part of the sigmoid curve. But 2008 to 2022 was less groundbreaking, at least on the technological front. For years it has felt like nothing has really changed since smartphones came along; there's a reason very few people camp outside Apple Stores for the new iPhone anymore. Sure, technology produced an entirely new system of finance with cryptocurrency, but overall our lives are still pretty similar. However, in 2023 I think we're at the start of a new sigmoid curve for AI. The mainstream use of AI like ChatGPT, and the fact that self-driving cars are already in use in certain areas (like my former university), suggest that we are at the start of the curve for AI. Soon everything is about to change, just as fast and just as strangely as it did in the early 2000s, perhaps beyond all recognition.
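As a rough illustration of the shape being described, here is a minimal Python sketch using the logistic function, a standard mathematical stand-in for an s-curve. The parameters are arbitrary assumptions for the example and do not model any particular technology:

```python
import math

def adoption(year, midpoint=10.0, steepness=0.8):
    """Logistic (s-curve) toy model: slow takeoff, explosive middle, flat top."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

# Sampling the curve shows why a short slice of recent history can mislead:
# year-over-year change looks negligible near either end and enormous in the middle.
for year in range(0, 21, 2):
    print(f"year {year:2d}: {adoption(year):6.1%} adopted")
```

Looking at only the flat beginning or the flat end of such a curve would suggest that nothing much is happening, which is exactly the perceptual trap described above.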
Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.
From personal experience, I can attest that one reason people are reluctant to share these ideas is that those who believe them appreciate how far-fetched the ideas sound to others. Even I think they sound far-fetched, and I believe them! The problem is that normal human skepticism developed in a pre-singularity world and has up until now served humanity well, but it is not used to dealing with the conclusions that arise from a moment as unique in human history as the current one, and will thus falsely flag such conclusions as nonsense. Upon first hearing the thesis statement "if you take the right actions now there is a significant probability you could live forever as an immortal godlike being," a normal human's instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, everyone in the past has died. But no one flew airplanes before airplanes were invented either. The trends discussed previously, with their sudden spikes of progress in the past few decades, indicate that in some ways the early stages of the technological singularity can be thought of as having already begun. It is therefore understandable that these ideas seem strange, because the situation actually occurring is a provably strange (unique) one to be living through! Unfortunately for those attempting to share the ideas in this post, another instinct a normal human is sure to have is: "This person is arguing that in the near future either humans could live forever as immortal godlike beings or an apocalypse could destroy the planet. If there's one thing I know from history, it's that everyone who has said stuff like that in the past has been crazy, so this person is almost surely crazy too." Unfortunately, the social price of being thought crazy (at least by some) for sharing these ideas is also a barrier to getting them into the mainstream. One's instinct to be wary of statements like my thesis will almost always be correct, because in the vast majority of cases a statement that sounds like it came from an unhinged moron actually has. So I don't blame anyone for thinking that about me upon first reading my thesis statement; I would probably think it too. In fact, 99.9% of the time when I've told someone they could one day become a hyperintelligent godlike being, they have understandably thought I was completely insane. However, realize that there are smart people like Nick Bostrom (a professor of philosophy at Oxford), Sam Harris (a public intellectual), and Ray Kurzweil (a director of engineering at Google) who broadly agree with me. That doesn't mean I am right, but I hope I am in respectable enough company not to be immediately dismissed as an unhinged moron. Keep in mind that my argument can only become more compelling as the years go by and we keep improving our intelligent machines. And if I am correct about my thesis, and reading this post causes you to take actions that lead to a future in which you make the cutoff for immortality when you otherwise wouldn't have, then this post could be the most important thing you ever read, so it is at least worth serious consideration.
If I've convinced you of my thesis over the course of this post, or if you already agreed with it to begin with, then I'd venture to say that you can see further than the average human. As some of the few humans who can see far enough ahead to see what is happening, we can have an inordinate impact if we act... or if we don't. There are many people who can't see as well as we can, and they are counting on us to act. If the roles were reversed and I couldn't see, I'd hope that those who could see would do the same for me. Maybe we'll make the cutoff for immortality and become literal omnipotent, omniscient gods, in which case we will have truly gained everything. Or maybe we'll fail terribly and progress in AI will result in human extinction or some other unrecoverable global cataclysm that kills us all, in which case we will have lost everything. The technological singularity seems to be either the path toward heaven or the path toward hell. We will only get one chance to make sure the transition into the singularity goes smoothly. And individually, we will each only have one chance to make the cutoff for immortality. But at least we have a chance! In the history of the universe, we were lucky enough to be born immediately before the transition from carbon-based life to whatever comes next. We find ourselves on the final team of humans representing Earth during the endgame moves before the singularity. It's worth reflecting on the fact that it truly is just us on this planet; nobody is coming to help us. It is the responsibility of those who can see what needs to be done to act and do it. If you can see, of course, you have a responsibility to those who can't. But you also have a responsibility to yourself: to not waste the advantages afforded to you and thereby one day find yourself on your deathbed looking back and realizing that you could have ended up with everything but instead ended up with nothing because you didn't act. Preferably, you want to look back and be proud of the fact that when there was everything to gain and everything to lose, you never gave up. Those are the sorts of conscious beings that deserve to be immortal gods. This is the endgame. We'll all need to work together to succeed: to limit the number of people who needlessly die before the cutoff, to avoid apocalyptic extinction-level threats and possibly treacherous artificial godlike beings, and to gain kinds of joy that are far beyond human experiential or logical comprehension. I cannot think of a better cause to unite all of humanity.
In conclusion, achieving the technological singularity and bringing about superintelligent AI is how we will kill the metaphorical dragon from "The Fable of the Dragon-Tyrant" and become immortal superintelligent gods ourselves! Make killing the Dragon-Tyrant the goal of humanity. As in the fable, what if your father dies one day before the dragon is defeated? Then you will have to live with the thought: "If only humanity had started on this collective goal but one day sooner." Become a dragon-hunter: dedicate your life to slaying this figurative dragon. Study computer science and mathematics so that you can fight the dragon on the front lines. If you cannot do that, then you can help build a society with a "wartime" economy to support those doing the "fighting". At a minimum, you can help by spreading these ideas. Imagine running for president with immortality as one of the campaign goals! There is already a lot of discussion about the possible risks of AI in the mainstream, but a corresponding discussion about the possible benefits seems to be missing from the conversation. Almost nobody knows about these ideas, let alone champions them. Most humans have never even heard of the technological singularity, and most don't realize that a chance at immortality may now be within reach. As in the fable, the sooner all of humanity is convinced to make this project its top priority, the more people we will be able to save. The timeline could be accelerated if enough people are convinced of the goal, and the probability that you or your loved ones make the cutoff for immortality would increase accordingly. Try your best. I will fight for you regardless.
- Post compiled by: Oliver—Klozoff
Endnote for Christians (or possibly how to convince them):
These ideas do not conflict with Christianity. Even religions like Christianity should support this project: maybe the Christians were right all along, and the singularity is literally the revelation talked about in the Bible. I am an atheist, but if religious people believe that the technological singularity is the revelation and the subsequent ascension into eternal heaven and godhood, which isn't too far from the truth anyway, then that is all right by me as long as they help with the project. And if you are a Christian, consider this: the project of understanding Christianity is a continual process of discovery, as theological scholars and philosophers strive to get ever closer to the truth. For example, the doctrine of the Holy Trinity was only worked out after centuries of theological thought. Did Christians in the past believe the Earth was only a few thousand years old? Yes. Did Christians in the past believe that evolution was heresy? Yes. Most Christians these days have changed their views yet are still Christian. They are Christians, yet their beliefs have been influenced by improvements in logic, math, science, and technology. So, was the Bible wrong? Re-thinking Christianity is not a betrayal of unchanging truth. Christians need not identify with Creationism or Intelligent Design in order to see the magnificent achievements of modern science as a manifestation of the glory of creation rather than as a threat to faith. In John 10:34, Jesus defends himself against a charge of blasphemy by appealing to scripture: "Is it not written in your law, I said, Ye are gods?" Perhaps Jesus meant for us to become godlike ourselves and join him in heaven. Maybe he came to Earth 2,000 years ago to help set us on this path. And importantly, the Church has a lot of resources that could be used to help humanity in this project. But even if there is no relation between the technological singularity and Christianity, Christians should still push for the technological singularity, because this is not a competing religion; I know, because I am an atheist: I am not asking you to believe anything on faith, merely making a probabilistic argument. Furthermore, one can remain a Christian even after the technological singularity. If you remain a Christian, then you will still go to heaven whether you die in 50 years or 100 trillion years. So you would lose nothing by supporting this project if Christianity turns out to be true, but you would potentially lose everything by refusing if it turns out to be false. Christians might claim to be one hundred percent certain that Christianity is true anyway, but again, it would be wise to wait until you are superintelligent before acting on such certainty, given that, as explained above, there is nothing to lose by waiting.
Sources:
“The Fable of the Dragon-Tyrant” short story by Nick Bostrom (a philosopher at the University of Oxford) https://nickbostrom.com/fable/dragon
The Fable of the Dragon-Tyrant video by CGP Grey (adapted from Nick Bostrom's paper):
https://www.youtube.com/watch?v=cZYNADOHhVY&ab_channel=CGPGrey
Can we build AI without losing control over it? | video by Sam Harris (Ted Talk):
https://www.youtube.com/watch?v=8nt3edWLgIg&t=1s&ab_channel=TED
The AI Revolution: The Road to Superintelligence blog post:
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
I tried using AI. It scared me. (everything is about to change) YouTube video by Tom Scott:
https://www.youtube.com/watch?v=jPhJbKBuNnA&t=145s&ab_channel=TomScott
AI timelines: What do experts in artificial intelligence expect for the future?
https://ourworldindata.org/ai-timelines
Center for AI Safety website
https://www.safe.ai/statement-on-ai-risk
Interesting Books:
“The Singularity Is Near: When Humans Transcend Biology” book by Ray Kurzweil (Google’s Director of Engineering)
“Superintelligence: Paths, Dangers, Strategies” book by Nick Bostrom
"The Singularity Is Nearer" book by Ray Kurzweil (expected publication: 2025)
Also, see my longer Reddit post...