The Personal Implications of AGI Realism

Superintelligence Is On The Horizon

It’s widely accepted that powerful general AI, and soon after superintelligence, may eventually be created.[1] There’s no fundamental law keeping humanity at the top of the intelligence hierarchy. While there are physical limits to intelligence, we can only speculate about where they lie. It’s reasonable to assume that even if progress follows an S-curve, its plateau will sit far beyond anything even 15 John von Neumann clones could imagine.

Gwern was one of the first to articulate the “scaling hypothesis”; others came around later. While debate continues over whether scaling alone will lead to AI systems capable of self-improvement, it seems likely that scaling, combined with algorithmic progress and hardware advancements, will continue to drive progress for the foreseeable future. Dwarkesh Patel estimates a “70% chance scaling + algorithmic progress + hardware advances will get us to AGI by 2040”. These odds are too high to ignore. Even if there are delays, superintelligence is still coming.

Some argue it’s likely to be built by the end of this decade; others think it might take longer. But almost no one doubts that AGI will emerge this century, barring a global catastrophe. Even skeptics like Yann LeCun predict AGI could be reached in “years, if not a decade.” As Stuart Russell noted, estimates have shifted from “30-50 years” to “3-5 years.”

Leopold Aschenbrenner calls this shift “AGI realism.” In this post, we focus on one key implication of this view—leaving aside geopolitical concerns:

We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn’t some random community of coders writing an innocent open source software package; this isn’t fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it’ll be the most important thing we ever do.

Of course, this could be wrong. AGI might not arrive until later this century, though that seems increasingly unlikely. Even so, it’s a scenario we must still consider.

Even in a scenario where AGI arrives late in the century, many of us alive today will witness it. I was born in 2004, and it’s more probable than not that AGI will be developed within my lifetime. While much attention is paid to the technical, geopolitical, and regulatory consequences of short timelines, the personal implications are less often discussed.

All Possible Views About Our Lifetimes Are Wild

This title riffs on Holden Karnofsky’s post “All Possible Views About Humanity’s Future Are Wild.” In essence, either we build superintelligence—ushering in a transformative era—or we don’t. We may see utopia, catastrophe, or something in between. Perhaps geopolitical conflicts, like a war over Taiwan, will disrupt chip manufacturing, or an unforeseen limitation could prevent us from creating superhuman intelligence. Whatever the case, each scenario is extraordinary. Arguably, no view of our future is “tame.” There is no non-wild view.

Personally, I want to be there to witness whatever happens, even if it’s the cause of my demise. It seems only natural to want to see the most pivotal transition since the emergence of intelligent life on Earth. Will we succumb to Moloch? Or will we get our act together? Are we heading toward utopia, catastrophe, or something in between?

The changes described in Dario Amodei’s “Machines of Loving Grace” paint a picture of what a predominantly positive future of highly powerful AI systems could look like. As he says in the footnotes, his view may even be perceived as “pretty tame”:

“I do anticipate some minority of people’s reaction will be “this is pretty tame”. I think those people need to, in Twitter parlance, “touch grass”. But more importantly, tame is good from a societal perspective. I think there’s only so much change people can handle at once, and the pace I’m describing is probably close to the limits of what society can absorb without extreme turbulence.”

To be clear, what Dario describes as being perceived as “tame” already includes:

  • Potentially doubling the human life span.

  • The ability to greatly enhance human cognitive and emotional abilities, expanding the space of what is possible to experience.

  • Reliable prevention and cures for nearly all diseases.

  • The ability for people to have full control over their weight, physical appearance, reproduction, and other biological processes.

AI researcher Marius Hobbhahn speculates that the leap from 2020 to 2050 could be as jarring as transporting someone from the Middle Ages to modern-day Times Square, exposing them to smartphones, the internet, and modern medicine.

Or, as Leopold Aschenbrenner points out, we might see massive geopolitical turbulence.

Or, in Eliezer Yudkowsky’s view, we face near-certain doom.

Regardless of which scenario you find most plausible, one thing is abundantly clear: all possible views about our lifetimes are wild.

What Does This Mean On A Personal Level?

It’s dizzying to think that you might be alive when the 24th century comes crashing down on the 21st. If your probability of doom is high, you might be tempted to maximise risk—if you enjoy taking risks—since there would seem to be little to lose. However, I would argue that if there’s even a small chance that doom isn’t inevitable, the focus should be on self-preservation. Imagine getting hit by a truck just years or decades before the birth of superintelligence.

It makes sense to fully embrace your current human experience. Savor love, emotions—positive and negative—and other unique aspects of human existence. Be grateful. Nurture your relationships. Pursue things you intrinsically value. While future advanced AI systems might also have subjective experiences, for now, feeling is something distinctly human.

For better or for worse, no part of the human condition will remain the same after superintelligence. Biological evolution is slow, but technological progress has been exponential. The modern world itself emerged in the blink of an eye. If we survive this transition, superintelligence might bridge the gap between our biological limitations and technological capabilities.

The best approach, in my view, is to fully experience what it means to be human while minimising your risks. Avoid unnecessary dangers—reckless driving, water hazards, falls, excessive sun exposure, and mental health neglect. Look both ways when crossing the street. Focus on becoming as healthy as possible.[2]

This video provides a good summary of how to effectively reduce your risk of death.

Maybe reading science fiction – series like The Culture by Iain M. Banks – is a good way to prepare for what’s coming.[3] Alternatively, some may prefer to stay grounded in present reality, knowing that the second half of this century might outpace even the wildest sci-fi. In ways we can’t fully predict, the future could be stranger than anything we imagine.

Holden Karnofsky has described a “call to vigilance” when thinking about the most important century. Similarly, I believe we should all adopt this mindset when considering the personal implications of AGI. The right reaction isn’t to dismiss this as hype or far-off sci-fi. Instead, it’s the realisation: “…oh… wow… I don’t know what to say, and I think I might vomit… I need to sit down and process this.”

To conclude:

Utopia is uncertain, doom is uncertain, but radical, unimaginable change is not.

We stand at the threshold of possibly the most significant transition in the history of intelligence on Earth—and maybe our corner of the universe.

Each of us must find our own way to live meaningfully in the face of such uncertainty, possibility, and responsibility.

We should all live more intentionally and understand the gravity of the situation we’re in.

It’s worth taking the time to seriously and viscerally consider how to live in the years or decades leading up to the dawn of superintelligence.

  1. ^

    For the purpose of this post, we’ll abide by the definition in DeepMind’s paper “Levels of AGI for Operationalizing Progress on the Path to AGI”.

  2. ^

    Maybe you could argue getting maximally healthy isn’t *that* important, since in a best-case scenario for superintelligence nearly all diseases and ailments would be solved. But it still probably makes sense to hedge against very long timelines and stay healthy.

  3. ^